Continuous Formative Assessment

Norman Herr, Marten Tippens, Mike Rivas, Virginia Oberholzer Vandergon, Matthew d’Alessio, John Reveles

California State University, Northridge, USA

 

Abstract

Introduction

Methods of Cloud-Based Continuous Formative Assessment

Assessing understanding using cloud-based spreadsheets (quick-writes)

Assessing conceptual development using cloud-based presentations

Assessing inquiry learning using cloud-based graphing of data

Assessing reading and writing skills using cloud-based documents

The Scan & Post technique

Assessing visual learning using cloud-based photo/movie albums

Research Findings, Solutions and Recommendations

Future Research Directions

Conclusion

References


Abstract

Continuous Formative Assessment (CFA) is a strategy that employs free and accessible collaborative cloud-based technologies to collect, stream, and archive evidence of student knowledge, reasoning, and understanding during STEM lessons, so that instructors and students can make evidence-based decisions for adjusting lessons to optimize learning. Writing samples, diagrams, equations, drawings, photos, and movies are collected from all students and archived in cloud-based databases so that instructors can assess student understanding during instruction, and monitor learning gains over time. This chapter introduces and explains CFA techniques and provides preliminary research pertaining to the effectiveness of CFA instructional strategies in promoting student accountability, metacognition, and engagement in STEM courses, and suggests avenues for future research.

Keywords: Learning Outcomes, Engagement, Metacognition, Accountability, Collaboration, Evaluation, Online Teaching Strategies, Science Education, Inquiry Learning, Scientific Investigations

 

Introduction

 

The development and availability of new technologies have repeatedly transformed teaching and learning. The chalkboard (c. 1890), pencil (c. 1900), film projector (c. 1925), radio (c. 1925), overhead projector (c. 1930), ballpoint pen (c. 1940), mimeograph (c. 1940), videotape (c. 1951), educational television (c. 1958), photocopier (c. 1959), Scantron® (c. 1972), personal computer (c. 1980), graphing calculator (c. 1985), interactive whiteboard (c. 1999), and iPad (c. 2010) are just a few of the myriad technological innovations that have had a profound influence on teaching and learning (Dunn, 2011). Puentedura (2009) has proposed the SAMR (substitution, augmentation, modification, and redefinition) model to gauge the influence of new technologies on teaching and learning. While some technologies simply substitute for or augment traditional strategies and modes of instruction and learning, others have the potential to transform learning experiences through the modification and redefinition of learning activities. The advent of cloud-based computing has the potential not only to modify, but also to redefine, a variety of educational activities, not the least of which is formative assessment. This chapter introduces Continuous Formative Assessment (CFA), a potentially transformative strategy that employs free and accessible collaborative cloud-based technologies to collect, stream, and archive evidence of student knowledge, reasoning, and understanding during STEM lessons, so that instructors and students can make evidence-based decisions for adjusting lessons to optimize learning.

 

Summative Assessment - Today’s educators, students, and educational institutions live in a culture of assessment, accountable for their work to their clientele, parents, governmental institutions, and the general public. Assessment is the process of gathering and evaluating information to determine what students know and can do with their knowledge as a result of their educational experiences. Assessments provide data by which we can evaluate the competency of students, educators, and educational systems. Teachers, schools, districts, states, and national organizations develop assessment tools to evaluate student learning and the effectiveness of the educational process so that decisions can be made regarding the placement and promotion of students as well as the effectiveness of educational programs (Huba & Freed, 2000; Jago, 2009).

 

Standardized assessments have been instituted in educational systems around the world and have been used to compare the effectiveness of education not only within schools, but also within and between states, provinces, and countries. Major federal legislation, such as the United States’ No Child Left Behind Act of 2001, has substantially increased assessment requirements and set accountability standards for schools, districts, and states, with measurable adequate yearly progress objectives for all students (Guilfoyle, 2006; Linn, Baker & Betebenner, 2002). The standards and accountability movement has grown dramatically in many countries, and has recently been expressed in the US through the widespread adoption of the Next Generation Science Standards (NGSS) as well as the Common Core State Standards in English and mathematics. These standards have been designed to provide all students with an internationally benchmarked education that is evaluated through periodic assessments (Common Core State Standards Initiative, 2010a, 2010b; NGSS, 2013; Porter, McMaken, Hwang & Yang, 2011).

 

Former US Secretary of Education Margaret Spellings often stated, “What gets measured gets done,” reflecting the standards-based logic that has undergirded the development of NCLB, the Common Core, and the NGSS (Guilfoyle, 2006, p. 8). When politicians and community activists call for educational accountability, they are generally referring to summative assessments that provide information on what students can do as a result of their educational experiences. Such assessments are “summative” in that they provide a summary of how students, teachers, and institutions have performed.

 

Formative Assessment - Although growing demands for educational accountability have produced a wealth of literature, legislation, initiatives, reforms, and professional development, the vast majority has focused on assessment of learning (summative assessment) rather than assessment for learning (formative assessment). Although summative assessment is invaluable in providing data regarding student learning and the effectiveness of educational programs, it does little to shape teaching and learning during instruction. Summative assessments, such as tests, reports, papers, and quizzes, provide valuable information regarding what students have learned, but formative assessments are needed to provide critical information to optimize learning during instruction. Simply stated, formative assessment is the process used by both teachers and students to identify and respond to student learning in order to enhance learning during instruction (Popham, 2008). What makes formative assessment ‘formative’ is that it is immediately used to adapt instruction to meet the needs of the learners in real time (Shepard, 2005).

 

Ramaprasad (1983) set forth three key processes in learning and teaching that have helped provide a foundation for a theory of formative assessment: establishing where learners are in their learning, determining where they need to go, and deciding what needs to be done to get them there. In their work to develop a theory of formative assessment, Black and Wiliam built upon Ramaprasad’s basic ideas to provide the following definition of formative assessment:

 

“Practice in a classroom is formative to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited” (Black & Wiliam, 2009, p. 9).

 

Furthermore, they identified five key strategies in formative assessment: 

 

“(1) Clarifying and sharing learning intentions and criteria for success; (2) Engineering effective classroom discussions and other learning tasks that elicit evidence of student understanding; (3) Providing feedback that moves learners forward; (4) Activating students as instructional resources for one another; and (5) Activating students as the owners of their own learning” (Black & Wiliam, 2009, p. 7).

 

 

Traditional Formative Assessment (TFA) - Before introducing CFA, it is important to review methods of traditional formative assessment (TFA) that have been used, and are being used, in classrooms around the world to help teachers identify learning needs and help students develop an understanding of their own strengths and weaknesses, providing evidence that will help them take greater responsibility for their own learning. There are a variety of methods of traditional formative assessment, including calling on specific individuals, asking questions of the entire class, inviting choral responses, requiring journal entries, exit tickets, one-minute essays, and “pop” quizzes. There is much research to show that traditional formative assessments can be used to improve student learning success, because well-designed formative assessments provide information to make instructional modifications in real time to address student needs (Majerich, Bernacki, Varnum, & Ducette, 2011; Fluckiger, Vigil, Pasco, & Danielson, 2010; Jahan, Shaikh, Norrish, Siddqi, & Qasim, 2013; Youssef, 2012; Black & Wiliam, 2009; Shepard, 2005).

 

Formative assessments have been shown to be particularly valuable with lower performing students because learning deficiencies can be identified early in the learning cycle, providing instructors with the information they need to make teaching modifications to help these students before they get discouraged and become disengaged (Athanases & Achinstein, 2003).  In addition to providing instructors with information on student understanding, formative assessment provides learners with information that can be used to self-regulate their learning. Nicol and Macfarlane-Dick (2006) argued that formative assessment is key to self-regulated learning. 

 

Limitations of Traditional Formative Assessment (TFA) - Perhaps the most common formative assessment technique is asking questions of the class during instruction. Teachers may gauge the level of understanding by a show of hands. For example, an instructor might ask students for the answer to an algebraic equation, and while many students may raise their hands, the instructor obtains accurate data only on those students who are called upon to verbalize their answers. Unfortunately, forming an assessment of the entire class from such small and isolated samples can give the instructor an erroneous picture of student understanding (Great Schools Partnership, 2015).

 

Because it is impossible to call on each student every time, teachers have long realized the limitations of classroom questioning techniques and have adopted other approaches to provide formative assessment data.  In an effort to obtain more inclusive classroom responses, many teachers require their students to submit “exit tickets” as they leave class. This technique provides the instructor with written responses to prompts about a lesson the students just received. Although the exit ticket affords the opportunity to assess the understanding of the whole class, it delivers such data too late for the instructor to adjust instruction during that class period.  For example, exit tickets may show the instructor that the majority of the students did not understand the first half of the lesson, which subsequently led to confusion in the second half.  If, however, the instructor collected data on student understanding during instruction, he or she could adjust the lesson throughout the class session to address immediate needs and thereby deliver a much more effective lesson. Other traditional formative assessment techniques such as notebook checks, one-minute essays and individual desk-to-desk checks share similar limitations by providing either too little data, or providing it too late to adjust real-time instruction.

 

Another form of traditional formative assessment is the modeling methodology that is promoted for use in physics instruction (Hestenes, 1987; Wells, Hestenes, & Swackhamer, 1995). With this method, students construct scientific diagrams (models) to describe, explain, predict, and control physical phenomena. Students share their models with the teacher and the class by drawing them on small whiteboards that they hold up in response to teacher prompts. Although such methods allow the teacher and the entire class to view the models of all students immediately, and thereby provide a better assessment of the understanding of the entire class, the detail in such diagrams and drawings is often too small to be seen from across the room, and the information is available only until it is erased to answer the next prompt. A record of the learning activity is lost when the boards are erased, which removes the opportunity to see how students’ understanding has changed over time. This type of formative assessment also puts a large demand on the teacher to interpret and synthesize data from dozens of students in a very limited time.

 

There is much data on the value of traditional forms of formative assessment in promoting adaptive instruction and self-regulated learning, yet it is clear that there is a need for formative assessment techniques that gather information on the entire class during instruction so that changes can be made when they are most needed. In addition, there is a need for such formative assessment techniques to provide information that can be used to track student performance over the course of a semester as well as to compare the understanding of students within and between classes (Nicol & Macfarlane-Dick, 2006).

 

The Need for Improved Formative Assessment - A number of national reports have warned that America faces a shortage of college graduates adequately prepared to work in STEM-related fields (National Academy of Sciences, 2006; National Commission on Mathematics and Science Teaching for the 21st Century, 2000). One of the reasons for this purported shortage is the high transfer rate of college students out of STEM-related majors to non-STEM fields. A study by the Higher Education Research Institute (2012) found that more than 47% of science majors and 63% of mathematics majors switched out of their majors before graduating. Such exit rates are substantially higher than those for social science, history, fine arts, and English majors, who exited their majors at rates between 15% and 35% (Higher Education Research Institute, 2012). In search of a cause for this excessive emigration from STEM subjects, Hewitt and Seymour performed a three-year longitudinal study of 335 students attending seven different four-year institutions across the nation. The authors stated that “The experience of conceptual difficulty at particular points in particular classes, which might not constitute an insuperable barrier to progress if addressed in a timely way, commonly sets in motion a downward spiral of falling confidence, reduced class attendance, falling grades, and despair – leading to exit from the major” (Hewitt & Seymour, 1997, p. 35).

 

Because the curriculum in most STEM-related courses is progressive in nature, students may have excessive difficulty understanding tomorrow’s lesson if they do not master the concepts from today’s lesson, and this may set in motion the “downward spiral” of which Hewitt and Seymour speak.  If, however, instructors are able to accurately assess the learning of all students during instruction, they may be better prepared to identify and address learning difficulties in real time before the “downward spiral” begins, and reduce the hemorrhaging of students from STEM-related majors. 

 

SRS-based Continuous Formative Assessment - Although educators have long realized the limitations of TFA, it wasn’t until the 1970s that digital solutions were developed to address these needs, when William Simmons and Theodore Gordon designed the first electronic audience response system (Simmons, 1988). In recent years, engineers and programmers have introduced a wide variety of student response systems (SRS) for use in educational settings, including dedicated “clickers,” computer software, and smartphone apps that aggregate student inputs (Kay & LeSage, 2009). Such systems have the ability to track individual responses, display polling results, document student understanding of key points, and gather data for reporting and analysis during the instructional process. These hand-held dedicated systems allow students to input responses to questions posed by their instructor, and provide immediate statistics on student performance on true-false, multiple-choice, and short-answer questions. Studies have documented improvements in student participation, attendance, and learning with the use of such systems (Beatty & Gerace, 2009; Bennett & Cunningham, 2009; Gok, 2011). Researchers have noted that SRSs also increase student accountability as participants realize that their contributions are monitored by their instructors throughout the course of instruction (Kaleta, 2007). Although SRSs have been shown to be valuable formative assessment tools, current systems have limited input capabilities and cannot receive the wide range of complex text, audio, video, or graphic responses necessary to assess higher levels of understanding. In addition, most current student response systems rely on dedicated devices and require that assessments be prepared in advance of instruction, severely limiting the spontaneity needed to adapt instruction to immediate class needs.

 

Cloud-based Continuous Formative Assessment – While traditional formative assessment (TFA) techniques gather information that can be used to reform and optimize both teaching and learning during instruction, the data they provide may be too limited in scope to give an accurate picture of the capabilities and understandings of the entire class, too late to allow for changes in instruction when they are needed, or too short-lived to be of value for teacher research, reflection on best practices, and student metacognition about their learning. Fortunately, the advent of collaborative cloud-based document technologies (e.g., Google Forms, Sheets, Docs, Slides, Drawings), combined with the growth of Wi-Fi and phone/data networks and the prevalence of smartphones, tablets, and laptops, provides instructors with the opportunity to perform formative assessments that instantly and simultaneously collect data from all students in a format that can be reviewed in real time and stored for future reference.

 

Cloud-based Continuous Formative Assessment (CFA) is a strategy that employs free and accessible collaborative cloud-based technologies, to collect, stream, and archive evidence of student knowledge, reasoning, and understanding during STEM lessons, so that instructors can make evidence-based decisions for adjusting lessons to optimize learning. Writing samples, diagrams, equations, drawings, photos, and movies are collected from all students and archived in cloud-based databases so that instructors can assess student understanding during instruction, and monitor learning gains over time. 

 

Cloud computing allows application software to be accessed using internet-enabled devices, and student-generated data to be stored on off-site servers that are freely accessible by teachers and students, regardless of their geographical location. In this chapter we focus on the use of free cloud-based applications that can be run using modern web browsers on laptop computers, desktop computers, tablets, and smartphones. There are numerous advantages to using collaborative cloud-based resources for CFA as opposed to using SRS or local server-based resources. Cloud-based resources are maintained by offsite professionals, reducing the need for dedicated hardware and local IT support. Cloud-based resources are accessible by educators and students around the world, providing a consistent interface that promotes the sharing of resources. The strategies for CFA presented in this chapter have been designed and tested by both secondary school and college educators and students (Tippens, 2015; Herr, Rivas, Chang, Tippens, Vandergon, d'Alessio, & Nguyen-Graff, 2015), and it is the opinion of the authors that they hold significant potential for promoting continuous formative assessment in STEM classrooms at both of these levels.

 

Using collaborative cloud-based technologies, instructors can instantly collect and analyze large sets of data from multiple students, groups, and class sections with speed and accuracy, regardless of the physical location of students, providing the opportunity for continuous formative assessment.  Instructors can examine the responses of individual students, or trends and patterns in the data using automated graphs and charts (Herr, Rivas, Foley, Vandergon, & Simila, 2011b; Herr, & Tippens, 2013; Herr & Rivas, 2014).  This chapter will introduce a range of collaborative cloud-based formative assessment techniques that enable educators to continuously monitor student ideas and adjust their instructional practice to enhance student learning.  In addition, this chapter will provide an analysis of the perceived effectiveness of such techniques by pre-service and in-service science, technology, engineering, and math (STEM) teachers in K-20 classrooms. 

 

In developing a theory of formative assessment, Black and Wiliam (2009) provided a definition and strategies that identify formative assessment as a process in which teachers, learners, and their peers work together to activate and optimize learning. The goal of this chapter is to illustrate how collaborative cloud-based technologies can be employed to promote formative assessment in STEM classrooms through the strategies of CFA, so that teachers, learners, and their peers have the information necessary to work together to activate and optimize learning. As expressed in the Handbook of Formative Assessment, “Addressing the challenges and embracing the potential power of formative assessment offers substantial promise for stimulating greater gains in students’ achievement and responsibility for their learning” (Andrade & Cizek, 2010, p. 15). In this chapter we provide new strategies of formative assessment using collaborative cloud-based tools, provide initial research on their effectiveness, and set forth questions for future research to investigate the potential for CFA in stimulating greater student achievement and responsibility for learning.

 

 

Methods of Cloud-Based Continuous Formative Assessment

 

The following are hallmarks of CFA: (1) All students participate in all assessments, promoting continuous engagement; (2) student contributions are shared with the entire class, promoting a culture of accountability; (3) assessment is flexible, allowing for spontaneous assessment; and (4) assessments are stored in the cloud for a permanent record of learning, promoting a reflective learning environment. (Herr, Rivas, Chang, Tippens, Vandergon, d'Alessio, & Nguyen-Graff, 2015).  Later in this chapter we will discuss specific methods of CFA, and show how this cloud-based approach to formative assessment can improve learning outcomes in STEM-related classes. In addition, we will discuss research related to the effectiveness of CFA in enhancing student engagement, metacognition, accountability for learning, and collaboration.

 

According to the Pew Research Internet Project, 98% of all Americans in the 18-29 year-old cohort owned cell phones as of January 2014, and 84% owned smart phones (Pew Research Center, 2014, p. 1).  In addition, 42% of all American adults owned computer tablets (p. 1).  These percentages are undoubtedly higher as of the writing of this paper, and are expected to continue to rise in the years ahead. CFA capitalizes on the abundance of student-owned mobile computing devices, commercial cell-phone networks, and institution-owned computer labs and Wi-Fi networks to engage and monitor learners in STEM classrooms.  Cloud computing allows students and teachers to share a variety of data in a collaborative online environment that enables formative assessment. 

 

Docendo discimus! By teaching, we learn! Using cloud-based collaborative resources, students have the potential to be better equipped to teach others, and thereby learn the concepts more deeply themselves. The instructor can rapidly scan the work that has been submitted, and then ask specific students to share their work with the class. Students then describe their work as the rest of the class watches on the big screen or on their individual devices. Rather than randomly calling on students and embarrassing them when they don’t know the answer to a question, the instructor pre-screens work and calls on only those students who have demonstrated understanding. Students realize that they are being called upon because the instructor believes that they have something significant to share with their peers. They are therefore more likely to be confident in what they share, and their peers are more likely to listen to what they say (Herr, Rivas, Chang, Tippens, Vandergon, d'Alessio, & Nguyen-Graff, 2015). The astute educator will ensure that in due course each student has the opportunity to share their work, thereby building the confidence of all students and giving them the opportunity to learn by teaching. In the sections that follow, we discuss six CFA techniques, illustrating how instructors can assess understanding using cloud-based spreadsheets (quick-writes), assess conceptual development using cloud-based presentations, assess inquiry learning using cloud-based graphing of data, assess reading and writing skills using cloud-based documents, assess problem-solving skills using the scan & post technique, and assess visual learning using cloud-based photo/movie albums.

 

Assessing understanding using cloud-based spreadsheets (quick-writes)

 

The CFA model has been made possible by the development of free collaborative web-based spreadsheets, documents, presentations, and drawings (Herr & Rivas, 2014; Herr, Rivas, d’Alessio & Vandergon, 2012; Foley & Reveles, 2014). We shall first discuss the use of collaborative spreadsheets to collect student “quick-writes,” which are text-based responses to teacher prompts. A cloud-based “quick-write” is built on a collaborative spreadsheet in which data is arranged in rows and columns. Each cell can contain data or calculations based on the data in other cells. Google® Sheets, Zoho® Sheets, and Microsoft® Excel Web Applications are popular, free, cloud-based collaborative spreadsheets that serve well for this purpose. Throughout this chapter we will illustrate using examples from the Google Docs suite of web-based applications. Although these collaborative cloud-based resources were developed primarily for business applications, they afford potential benefits for formative assessment in STEM classrooms (Herr, Rivas, Chang, Tippens, Vandergon, d'Alessio, & Nguyen-Graff, 2015).

 

To develop a “quick-write,” the instructor simply creates a cloud-based spreadsheet and shares editing privileges with his or her students. Sharing privileges can be granted to anyone who has the link, or to specific individuals by email invitation. The benefit of the first option is that no log-in is required to make contributions, but the downside is that all contributions are made anonymously and it is impossible for the instructor to confirm that contributions were made by specific individuals. If, however, editing privileges are granted only to specific individuals by email invitation, it is possible to know who is making which contributions; both options can also be configured programmatically, as sketched below. Once the spreadsheet is created and shared, the following steps should be followed (Herr, Rivas, Foley, Vandergon, d'Alessio, Simila, Nguyen-Graff, & Postma, 2012).
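For instructors comfortable with light scripting, both sharing options can be set with Google Apps Script, the scripting environment built into the Google Docs suite (shown here in TypeScript-compatible syntax). This is a minimal sketch under assumptions: the spreadsheet title and email addresses are illustrative, not prescribed by the CFA method.

```typescript
// Minimal sketch (Google Apps Script) of the two sharing options described above.
function shareQuickWrite(): void {
  const ss = SpreadsheetApp.create('Quick-Write'); // illustrative title

  // Option 1: anyone with the link may edit -- no log-in required, but
  // contributions are anonymous.
  DriveApp.getFileById(ss.getId()).setSharing(
      DriveApp.Access.ANYONE_WITH_LINK, DriveApp.Permission.EDIT);

  // Option 2: invite specific students by email, so each contribution is
  // attributable to a known editor.
  ss.addEditors(['student.one@example.edu', 'student.two@example.edu']);
}
```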

 

Provide column headers for the first row. The first two cells should be titled “first name” and “last name”, or given code numbers if anonymity is desired. Each additional column is named to reflect its prompt. For example, if the first question to be asked is “What is your email address?”, then that column header should be titled “email address.” The first row should then be “frozen.” This means that the row is separated from the remainder of the sheet, stays visible, and can be used to sort the data beneath it. For example, when clicking on the cell titled “first name,” the user will see the option to sort the column alphabetically. The students or teacher can then be tasked with filling in the remainder of the names, and when all names are entered, the rows can be sorted alphabetically by either first or last name.
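These setup steps can also be scripted. The sketch below assumes a sheet named “Quick-Write”; the header labels and sort column are illustrative assumptions.

```typescript
// Minimal sketch: prepare the header row of a quick-write sheet.
function setUpQuickWrite(): void {
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Quick-Write');
  if (!sheet) return;
  // Title the first two columns (use code numbers instead if anonymity is desired).
  sheet.getRange(1, 1, 1, 2).setValues([['first name', 'last name']]);
  // "Freeze" the first row so headers stay visible and are excluded from sorting.
  sheet.setFrozenRows(1);
  // After names are entered, sort the data rows alphabetically by last name.
  sheet.sort(2);
}
```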

 

Once student names or code numbers are entered and alphabetized, teachers can provide prompts to which students simultaneously respond on the same document. It is generally best to create each new column immediately following the name columns. If students are instructed to always enter data in this column, then the most recent data will always appear adjacent to the names of the students who entered it. For example, if a student’s first name appears in column A and their last name appears in column B, then their most recent response will be in the adjacent column, column C. Whenever a new column is created, all of the data from previous entries is shifted to the right. An alternative is to “hide” the columns after each day’s responses, so that when students respond they are still answering in a cell that appears adjacent to their name or code (though the column heading may be M instead of C, for example). This keeps the spreadsheet cleaner looking and still provides the instructor with access to all of the students’ responses from the semester.

For example, teachers may enter student names or codes in columns A & B and pose a question in the header of column C. The cells in column C become highlighted when students start to enter their responses, providing the teacher with information regarding which students are composing answers and which need more time. Once the teacher has determined that students have had sufficient time to respond, he or she asks students to press the “enter” key, and instantly the cells are populated with student responses. Color-coding and rollover names or codes identify those who have made contributions and deter students from entering data in cells other than their own.
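A short script can likewise open each day’s response column. The sketch below implements the insert-a-new-column-C variant and notes the hide-columns alternative in a comment; the sheet name is again an assumption.

```typescript
// Minimal sketch: open a fresh response column beside the student names.
function newPrompt(promptText: string): void {
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Quick-Write');
  if (!sheet) return;

  // Variant 1: insert a new column C; earlier responses shift to the right.
  sheet.insertColumnAfter(2);
  sheet.getRange(1, 3).setValue(promptText);

  // Variant 2 (alternative): leave old columns in place and hide them, so the
  // next response column appears visually adjacent to the names:
  //   sheet.hideColumns(3, sheet.getLastColumn() - 2);
}
```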

 

The following example from a physiology class illustrates how the “quick-write” can be used not only to collect and summarize class data for analysis, but also for formative assessment by the instructor. Figure 1 shows a sample “quick-write” in which a physiology class investigated variations in the frequency sensitivity of human hearing. The instructor presented 100 dB pulses of sound at various frequencies, and directed students to enter a “1” in the appropriate student/frequency cell if they heard a tone from a sound generator, and a “0” if they did not. The spreadsheet was designed so that a graph of student responses was automatically and continuously generated adjacent to the student data (Figure 1). The physiology students used this data to describe trends and variations in frequency sensitivity within the population, while their instructor simultaneously used it to determine which students were confused, on-task, off-task, or slow to input, by examining their responses, or lack of responses, to prompts (see notes on the right side of Figure 1).
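One way the continuously updated graph in Figure 1 could be reproduced is with a summary row of SUM formulas and an embedded chart, as in the hedged sketch below; the sheet name, row counts, and column ranges are assumptions about the class roster, not fixed parts of the technique.

```typescript
// Minimal sketch: tally the 1/0 hearing responses per frequency and embed a
// chart that redraws as students type. Assumes ~40 students in rows 2-41 and
// ten frequency columns C-L.
function addHearingChart(): void {
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Hearing');
  if (!sheet) return;
  // Row 42: how many students heard each tone.
  for (let col = 3; col <= 12; col++) {
    sheet.getRange(42, col).setFormulaR1C1('=SUM(R2C[0]:R41C[0])');
  }
  const chart = sheet.newChart()
      .setChartType(Charts.ChartType.COLUMN)
      .addRange(sheet.getRange('C1:L1'))   // frequency labels
      .addRange(sheet.getRange('C42:L42')) // per-frequency counts
      .setPosition(2, 14, 0, 0)            // anchor the chart beside the data
      .build();
  sheet.insertChart(chart);
}
```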

 

As students enter their responses, teachers scan the developing response table to assess student understanding and adjust instruction accordingly.  For example, if few students provide an adequate written response, a teacher may pose a new question in a simpler format such as multiple-choice.  By programming the spreadsheet appropriately, the teacher obtains statistical data to indicate the percentages of students that understand or have specific misconceptions. The teacher opens a new worksheet for each day and tracks student performance and understanding by tabbing through worksheets from previous lessons. 
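“Programming the spreadsheet appropriately” can be as simple as a pair of counting formulas. The sketch below tallies the fraction of the class selecting each option of a multiple-choice prompt; the response range, placement of the statistics, and choice labels are illustrative assumptions.

```typescript
// Minimal sketch: live multiple-choice statistics placed to the right of the
// response area. Assumes answers 'A'-'D' in column C, rows 2-41.
function addChoiceStats(): void {
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Quick-Write');
  if (!sheet) return;
  const choices = ['A', 'B', 'C', 'D'];
  choices.forEach((choice, i) => {
    sheet.getRange(1, 15 + i).setValue(choice); // one header cell per choice
    // Fraction of non-empty responses matching this choice.
    sheet.getRange(2, 15 + i).setFormula(
        '=COUNTIF($C$2:$C$41, "' + choice + '") / COUNTA($C$2:$C$41)');
  });
}
```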

 

Professors who have used the CFA model in teacher preparation programs report greater student engagement in lessons and greater personal satisfaction with assessments of student progress (Tippens, 2015; d’Alessio & Lundquist, 2013; d’Alessio, 2014; Foley & Reveles, 2014). Bandura (1997) and Zimmerman (2002) suggested that formative assessments permit students to express themselves and develop a sense of self-efficacy, a key requirement for the development of autonomous learning strategies. Polanyi (1967) and Schön (1987) emphasized the formative and reflective purpose of student discourse and encouraged an open community of learners where ideas and opinions are exchanged so that students can co-construct their understanding. The CFA model provides an environment where such discourse can take place, but unlike a traditional science classroom, where certain students often dominate, all students are on an equal footing since all have access to the same document for submitting their contributions. In addition, this form of assessment allows students who struggle with language issues (e.g., English Language Learners) time to digest or translate the question and then answer it. Many of these students do not respond in a TFA classroom (d’Alessio, 2014; Foley & Reveles, 2014).

 

Assessing conceptual development using cloud-based presentations

 

One of the most widely used computer-based teaching tools in education is PowerPoint®, the immensely popular slide show presentation software developed by the Microsoft Corporation. By 2012, it was estimated that PowerPoint was installed on 1 billion computers worldwide and used an estimated 350 times per second (Parks, 2012, paragraph 1). Despite its popularity, the value of such computer-based slideshow software in education has been the subject of debate for many years, and some studies have shown a lack of evidence that PowerPoint is more effective in achieving learner retention than traditional presentation methods (Savoy, Proctor, & Salvendy, 2009). Indeed, some have argued that PowerPoint presentations often stifle engagement and student input, and may serve as an impediment to learning. “Death by PowerPoint” has not only been the title of two books, but has also entered the collegiate lexicon to describe the disengagement experienced by many students (Kerr, 2001; Flocker, 2006). The following excerpt from an opinion piece in a university student newspaper describes the potential downsides of injudicious use of PowerPoint in education:

 

“I know it seems like a cool technology at first glance. No writing on the board, no flipping overhead transparencies around until they read correctly. In practice, it turns out to be just one more thing standing in the way of teachers engaging students in meaningful learning. Here is how it goes: The lights dim, the teacher fools around with the laptop for a while, and then the show begins. The first slide pops up with a bunch of text, sometimes with cool animated transformation effects. The instructor either reads the text or reads it and elaborates on it a little. Then they try to move on to the next slide, saying "It isn't important to copy down the whole thing, just get the gist of it," but there are always those obsessive-compulsive students who can't help themselves. They just have to copy every word, so they beg for the teacher to leave the slide up longer, writing furiously, while everyone else sits there, bored…. I long for the old days when the instructor would write on the board. Then they would at least move around, write some things large, draw diagrams, underline for emphasis and look around the classroom as they talked. At times, they would even call on students! Interacting with students? How crazy is that?” (as cited in Davis, 2004, p. 1) 

 

Comments like these show the potential weaknesses of slideshow software in instruction. Teachers and professors often spend hours preparing lessons before class, and then present their work to their students. Lessons can tend to be one-way exchanges in which students dutifully transcribe the preconceived ideas expressed in the slideshow. Because the lesson is pre-prepared, it is challenging to incorporate student ideas and input. As a result, learners may tend to be observers rather than participants. While instructors may entertain student ideas, it is difficult to incorporate them spontaneously into the presentation, particularly if the slideshow is to be used for additional class sessions. Although the overuse of traditional slideshow software may lead to passive, unengaging lessons, the use of cloud-based, collaborative presentation software introduces the possibility of engaging all students during instruction while simultaneously benefiting from the visual clarity offered by presentation software. Using cloud-based collaborative slideshow software, students can become active participants in the lesson because they can all simultaneously edit the same presentation. The instructor can easily observe the work of all students by scanning the slide thumbnails, and can quickly ascertain which students are having difficulty with various concepts (Herr & Tippens, 2013; Herr et al., 2015). Collaborative cloud-based slideshows can be used to quickly and formatively assess student understanding of specific concepts, or to crowd-source learners to develop presentations that surpass the quality and/or length of comparable presentations made by individuals or groups. In the following paragraphs we shall discuss the techniques and potential benefits of each approach.

 

Formative assessment of specific concepts 

Although collaborative cloud-based spreadsheet applications (quick-writes) work well for collecting data in textual form, they are insufficient for assessing non-text-based skills or understanding. Fortunately, collaborative cloud-based presentation software includes collaborative drawing tools that allow many users to simultaneously manipulate drawings. To benefit from these capabilities, the instructor creates template pages and duplicates them for as many students or student groups as necessary. For example, if a chemistry instructor wants to assess student understanding of the differences between atoms, ions, and isotopes of the same element, he or she can construct simple Bohr model diagrams such as those seen in Figure 2. Each slide contains a circle for the nucleus, with concentric rings representing the principal quantum numbers. In addition, the slide is equipped with stacks of duplicated plusses (representing protons), minuses (representing electrons), and circles (representing neutrons). Once the slide template is created, the instructor merely duplicates the slide, and then titles each slide with symbols for specific atoms, ions, and isotopes. Each student or group is then assigned to a separate slide and instructed to drag the symbols for the protons, neutrons, and electrons to the appropriate locations to produce the Bohr diagram for the element they have been assigned. Since all students are working on the same slide show, but with different elements on different slides, the instructor can quickly scan the presentation to assess student understanding. If desired, the instructor may also direct students to review the work of their peers. Again, this type of assessment helps students with language issues, as they often can “draw” out their understanding much more easily than with written words and appropriate academic vocabulary. Although it is desirable that students develop skill with digital drawing tools, the required skill set can be minimized if the instructor prepares templates that simply require students to drag and drop specific features.
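The template duplication itself can be automated. The sketch below uses SlidesApp, the Apps Script interface to Google Slides, to copy a prepared Bohr-diagram template once per assigned species; the function name, text-box geometry, and species list are illustrative assumptions.

```typescript
// Minimal sketch: duplicate slide 1 (the Bohr template) once per assignment
// and stamp each copy with the assigned atom, ion, or isotope.
function makeBohrSlides(assignments: string[]): void {
  const deck = SlidesApp.getActivePresentation();
  const template = deck.getSlides()[0]; // the prepared template slide
  assignments.forEach((species) => {
    const copy = template.duplicate();
    copy.insertTextBox(species, 20, 10, 200, 40); // title box: left, top, w, h
  });
}

// Usage, e.g.: makeBohrSlides(['Na', 'Na+', 'Cl-', 'C-14']);
```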

 

Collaborative presentation development

It is also possible to formatively assess student understanding while collaboratively constructing a presentation or slide show. For example, if a STEM instructor wants his or her students to understand the significance of Latin and Greek root words in the construction of scientific, mathematical, or technological terms, he or she can develop a master slide such as the first one shown in Figure 3. The slide includes a title (the scientific term), a photo that illustrates the term, and text illustrating the word roots, their meanings, and examples of other scientific terms that use these roots. Once the instructor creates the master slide template, he or she simply duplicates it for as many students or groups as necessary, and assigns each a particular slide number. The students then replace the information on their slide with information relevant to the scientific term that they have been assigned or selected. This example again gives students who are struggling with academic language a chance to match visual understanding with appropriate words. Unlike many student projects, collaborative presentation development can be completed within minutes, and students can learn from each other as the presentation is taking shape. Students who do not fully understand the task can monitor the work of their peers to get a better understanding of what is to be done. When the slide show is completed, each student can simply make a copy in their own account and use it to show their parents and others what they have done, or use it as a study guide in preparation for summative assessments. In addition, the slide show can be shown in class, and each student can be given the responsibility of explaining their slide. Because all work is done on a collaborative, cloud-based slide show, the instructor can formatively assess all of his or her students and provide guidance or feedback as necessary. The instructor either goes to the desks of those students who need additional help, or simply collaborates with them on their slides to guide them in their learning.

 

An additional example may help illustrate the power of collaborative cloud-based presentation development as a formative assessment tool. Figure 4 shows the first few slides in a slideshow on climate and biomes. The instructor prepares a master slide that includes the title of a particular biome, a photo representing the type of vegetation one might see in that biome, and a climograph showing trends in monthly precipitation and temperature. The teacher then replicates the slide for each student, and substitutes climographs for each of the major biomes. Each student must then analyze the climograph on their slide, determine the biome it represents, and change both the title and the photograph. The instructor can scan student work as they do it and assign students who have accomplished the task successfully to help those who are struggling. In so doing, all students benefit, either from teaching their peers or from learning from them. The variety of collaborative cloud-based slide shows is limited only by the creativity of the teachers and students involved (Herr, Rivas, Chang, Tippens, Vandergon, d'Alessio, & Nguyen-Graff, 2015).

 

Assessing inquiry learning using cloud-based graphing of data

 

Inquiry-based learning is a constructivist-inspired pedagogy that developed during the discovery learning movement of the 1960s.  Inquiry learning starts by posing questions or problems that need to be solved, rather than presenting the discoveries and conclusions of others.  Although inquiry-based learning in STEM education has been encouraged for more than half a century, many instructors choose not to use it for a variety of practical reasons. Some of the greatest deterrents to the use of inquiry learning in the science classroom are problems associated with assessment and classroom management (Quigley, Marshall, Deaton, Cook & Padilla, 2011).   Unlike traditional direct instruction that presents the findings of others in a coherent, codified, and uniform manner, inquiry-based instruction encourages students to discover answers through personal investigation, but the answers novice learners derive are often incomplete, inaccurate, or misleading.  While it is relatively easy to assess student learning in direct content-based instruction, it is often difficult to assess learning in an inquiry-based learning environment.  Despite such difficulties, inquiry-based learning in STEM is being strongly encouraged by many educators and educational agencies around the world.   For instance, the Next Generation Science Standards and the Common Core State Standards encourage inquiry-based learning in the teaching of K-12 science, mathematics and engineering (Common Core State Standards Initiative 2010; NGSS Lead States, 2013).

 

Although there are numerous challenges to the implementation of inquiry-based learning, CFA techniques can be employed to gauge student understanding and progress during investigations. Equipped with such data, instructors can provide timely feedback to ensure that students remain on task and engage in productive investigations (d’Alessio & Lundquist, 2013). Here we present an example to illustrate the use of cloud-based formative assessment to promote effective inquiry learning, and the reader is encouraged to consult the collaborative data analysis chapter of this book, which includes more examples demonstrating not only the assessment aspect of such exercises but also the scientific processes required when doing inquiry-based science.

 

In this example, a science teacher wants his or her students to investigate the conditions necessary for rotational equilibrium. To simplify calculations, the teacher provides students with meter sticks, free-standing fulcrums, masses, and hangers for the masses, and challenges students to determine the conditions required for rotational equilibrium. To accomplish this task, students hang masses of any value at any length on the left and right sides of the fulcrum such that their meter stick is balanced and rotational equilibrium is achieved. Students then record the mass and distance to the fulcrum on the right side as well as on the left, and enter them into a cloud-based form which records the data in a common cloud-based spreadsheet (Figure 5). The instructor has configured the spreadsheet so that the clockwise torque (the product of the mass on the right and its distance to the fulcrum) is plotted in a scatter plot against the counterclockwise torque (the product of the mass on the left and its distance to the fulcrum). Each time a student or laboratory group records their results, a point is plotted on the graph. The graph develops rapidly as groups report their findings. Figure 5 shows 36 data points collected by 15 lab groups. As students observe the graph, they note that the clockwise and counterclockwise torques are equal in magnitude, and with minimal additional guidance, come to conclude that objects whose torques are balanced will persist in a state of rotational equilibrium.
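The relationship students are expected to discover can be written compactly. With masses m_L and m_R hung at distances d_L and d_R from the fulcrum, the two torques share the common factor g, which cancels from the balance condition; this is why plotting the simplified mass-distance products suffices:

```latex
\tau_{ccw} = m_L\, g\, d_L , \qquad \tau_{cw} = m_R\, g\, d_R ,
\qquad
\tau_{ccw} = \tau_{cw} \;\Longrightarrow\; m_L d_L = m_R d_R .
```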

 

The savvy instructor monitors the graph as students report their data, and quickly identifies those groups who have reported their data incorrectly. For example, the laboratory group that reported the outlier in Figure 5 simply had the decimal in the wrong place. Previously, other students had accidentally reported mass in the distance cell on the form, or distance in the mass cell, but their errors became obvious when plotted, and the instructor was able to show them the correct way to enter their data. Such cloud-based collaborative investigations offer a variety of benefits. Firstly, they allow the instructor to monitor student input and provide guidance as necessary to make sure that errors in technique or reporting are caught early and corrected (d’Alessio & Lundquist, 2013). Secondly, since each student contributes to a common database, investigations can be completed much more quickly, providing additional time for analysis, discussion, and additional investigations (Foley & Reveles, 2014). Thirdly, the data is kept in a common cloud-based location where all students can access it for their lab reports (Foley & Reveles, 2014). Fourthly, students learn that modern scientific and engineering investigation requires collaboration with other scientists and engineers (Herr & Rivas, 2010b; Herr & Rivas, 2014b).

 

Assessing reading and writing skills using cloud-based documents

 

Peer editing has long been promoted as a valuable tool for improving student writing, yet numerous critics challenge such assumptions. Jesnek (2011) went as far as to claim that “Since no discernable solution has immerged in over fifty years, it is time to finally dispel the illusion that peer editing guarantees better college writers” (p. 17). Jesnek commented that “the practice of peer editing is often inhibited by several other factors: time constraints, social graces, off-task talk, and the actual ability of writer and editor, not to mention the endlessly variable ways of creating (or not creating) peer editing rubrics” (2011, p. 17). Prior to the advent of collaborative cloud-based documents, students had to perform their peer edits in private settings in which no one could see their remarks until papers were exchanged. By contrast, when students critique each other on cloud-based documents, their comments are instantly available to everyone with whom the document is shared, including the instructor. The instructor can formatively assess the quality of the editorial comments, and provide immediate feedback using the associated chat features. Using collaborative, cloud-based documents, instructors can address deficiencies in editing techniques and provide immediate instruction regarding steps for improvement.

 

Collaborative cloud-based documents provide an opportunity for students to co-construct chapter summaries, lab reports, and related documents under the watchful eye of the instructor.  The instructor can spot errors in protocol, logic or interpretation and alert students before they finalize their reports.  Given this information, students can make necessary adjustments before submitting their documents for summative assessment.   Thus, students can correct small problems before they become big problems, and can avoid the hassle of having to unlearn and relearn material. (d’Alessio & Lundquist, 2013).

 

Collaborative cloud-based documents can also be used to formatively assess critical reading skills.  For example, an instructor can scan the text from a chapter into a collaborative cloud-based document and instruct students to identify key terms or concepts for discussion using the commenting tool.  While students comment, the teacher scans the entire document to observe student progress (Figure 6).  Once again, the instructor can provide real-time feedback using the comment tool or live chat, and can formatively assess student reading and reasoning skills and thereby adjust instruction to ensure better learning. 

 

The Scan & Post technique

A variety of student response systems (SRS) have been developed to overcome the deficiencies associated with traditional formative assessment techniques. Examples include dedicated audience response systems such as Turning Technologies®, iClicker®, and Audience Response Systems®, as well as mobile apps such as Socrative®, Poll Everywhere®, TopHat®, ClickerSchool®, Text the Mob®, Shakespeak®, Naiku Quick Question®, and Edmodo®. Such systems track individual responses, display results from polls, confirm understanding of key points, and gather data for reporting and analysis. Studies have shown improved student participation, attendance, and engagement with the use of SRSs (Beatty & Gerace, 2009; Bennett & Cunningham, 2009; Gok, 2011). Such systems not only provide information regarding student understanding, but also increase students’ accountability for their own learning (Akpanudo, Sutherlin, & Sutherlin, 2013; Han & Finkelstein, 2013; Kaleta, 2007). Although SRSs have been shown to be valuable formative assessment tools, current systems do not provide adequate means for collecting spontaneous free-form data such as student diagrams, multi-step solutions, observations, and experimental results. Most systems require instructors to create multiple-choice and short-answer questions prior to instruction and are incapable of soliciting the spontaneous free-form responses necessary to assess higher levels of understanding (Price, 2012). As a result, lessons may tend to be scripted and rigid as teachers adjust their instruction to synchronize with pre-written questions, and such rigidity has been identified as a major deterrent to successful formative assessment (Beatty, Feldman, & Lee, 2011).

 

To address the limitations and deficiencies of existing formative assessment techniques, the authors propose the scan & post technique, which allows instructors to spontaneously collect photographs, scans, and movies of student diagrams, multi-step solutions, observations, and experimental results in real time to make formative assessments of student skills using data that is sufficient, accurate, timely, and permanent. The scan & post model employs student-owned, camera-equipped mobile devices (smart phones, media players, tablet computers) and free cloud-based file synchronization services (e.g., DropBox®, Box®, OneDrive®, Google Drive®, Picasaweb®) to collect such data (Herr & Tippens, 2013). The instructor creates a shared folder “in the cloud” for each class, subfolders for each day, and sub-subfolders for each question, problem, or observation. Students draw their diagrams and perform solutions on paper, and then photograph or scan them with appropriate mobile apps (e.g., iOS camera apps, CamScanner®, TurboScan®) and upload them to the shared folders in the cloud (Figure 7). Students can also upload photographs and/or videos of homework, lab setups, observations, and experimental results. Within a few moments the instructor sees everyone’s work in a single matrix and can thereby ascertain the level of student understanding so that instruction can be adjusted to meet real-time needs. If desired, the instructor can grant viewing privileges to all students so they can learn from the work of their peers. Using the scan & post technique, the instructor obtains a permanent cloud-based digital record of all student work, which allows them to track student progress during instruction as well as over multiple days (Herr & Tippens, 2013).
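The folder hierarchy itself can be generated in seconds. The sketch below uses DriveApp, the Apps Script interface to Google Drive, to build one class folder, a subfolder for the day, and a sub-subfolder per prompt; all names are illustrative assumptions, and the class folder would still be shared with students as described above.

```typescript
// Minimal sketch: build the class/day/prompt folder tree for scan & post.
function makeScanAndPostFolders(className: string, day: string,
                                prompts: string[]): void {
  const classFolder = DriveApp.createFolder(className); // share this with the class
  const dayFolder = classFolder.createFolder(day);
  prompts.forEach((p) => dayFolder.createFolder(p));
}

// Usage, e.g.: makeScanAndPostFolders('Physics 100', 'Day 12',
//     ['Problem 1', 'Problem 2', 'Lab setup']);
```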

 

A few examples from university science classrooms will illustrate how the scan & post technique can be employed for continuous formative assessment. It should be noted that the instructor sees a large matrix of all student work, but in the examples that follow we display only single rows of selected matrices to conserve space.  In addition, the instructor can select any image and see it full screen, but in Figures 8-12 we show only thumbnails of the original images.

 

Revealing Alternative Predictions (Figure 8). The scan & post technique can inform instructors of a wider variety of misunderstandings than they might otherwise be aware of. Students were asked to predict what would happen if a 2-liter soda bottle were punctured at three locations while the water level was kept constant by "topping off" the bottle through a funnel (Figure 8). The apparatus was placed on a ring-stand at the front of the room, and students were asked to draw figures illustrating their predictions. Student drawings provided much more information than a show of hands could ever reveal, and highlighted a variety of misunderstandings regarding fluid pressure of which the instructor was previously unaware. Approximately one quarter of the students drew a diagram similar to A, predicting that depth in the water column would make no difference in fluid pressure. Another quarter of the class predicted that the water streams would never intersect, as illustrated in B. Approximately 10% predicted that no water would flow out, as illustrated in C. Finally, approximately 40% correctly predicted that water would flow farther with increasing depth, as shown in D. The instructor had performed this activity for more than twenty years, each time asking for a show of hands of those who predicted either option A or D. It had never crossed his mind that some students would predict options B or C if given the opportunity. Using the scan & post technique, the instructor realized students held a wider variety of ideas than formerly imagined. Provided with this information regarding student predictions, the instructor gave a short lesson on water pressure, after which all of the students redrew their diagrams predicting option D. When the activity was finally performed, students expressed satisfaction that their revised hypotheses correctly predicted what would happen (Herr & Tippens, 2013).

 

Revealing Misconceptions (Figure 9). University science teacher candidates were asked to draw diagrams to explain the reasons for seasonality on Earth. The instructor had assumed for many years that the vast majority of his science teacher candidates could explain this basic phenomenon well enough to teach secondary school students. To his surprise, only a quarter of his students produced diagrams that could be used for instruction. The scans revealed a series of inadequacies and misconceptions. Figure 9A shows that the teacher candidate had some understanding that the tilt of the Earth's axis was partially responsible for seasonality, but the diagram was so sketchy and incomplete that it would be useless for instruction. Figures 9B and 9C illustrate a widely held misconception that seasons are due to the elliptical orbit of the Earth. Although the Earth does travel in an elliptical orbit, as do all satellites, its orbit is nearly circular, unlike the highly eccentric orbits shown. Figure 9C illustrates that the teacher candidate had some understanding that the tilt of the Earth's axis was involved, but the diagram shows that he or she believed the elliptical nature of the Earth's orbit was equally important. Approximately one quarter of the teacher candidates produced correct diagrams such as the one illustrated in Figure 9D. Upon seeing the thumbnails of all explanations, the instructor realized that he would need to address student misconceptions before he could continue with the lesson. More importantly, this activity provided an opportunity to illustrate the importance of diagrams during science instruction, as students critiqued each other's diagrams from the perspective of science learners. The scan & post technique provides science teachers with valuable information during instruction so that they can adapt their instruction to the needs of their students (Herr & Tippens, 2013; Herr, Rivas, Chang, Tippens, Vandergon, d'Alessio, & Nguyen-Graff, 2015).

 

Pre-lab Check for Understanding (Figure 10). The scan & post technique can be used to determine whether students are adequately prepared to perform a laboratory experiment. If students do not understand the experimental design and set-up, their laboratory experience will probably be pointless and frustrating. The scan & post technique allows the instructor to check for understanding before proceeding. In this activity, students were given instructions regarding a technique for determining the wavelength of light from a laser pointer using a diffraction grating. The instructor thought that he would need to give a lengthier explanation, but a quick review of student drawings revealed that his students understood the experimental procedure well enough to begin the laboratory activity. Continuous formative assessment using the scan & post technique can therefore save valuable class time by determining when students are ready to proceed (Herr & Tippens, 2013).

 

Determining students’ abilities to follow verbal instructions (Figure 11). The scan & post technique can be used to assess students’ understanding of spoken instructions. How can instructors know whether students understand their verbal instructions? The scan & post technique allows them to visualize, archive, and display student work. In this activity, students were asked to draw maps based on a series of verbal instructions provided by the professor. Although the drawings showed that these students had relatively good mapping skills, they also revealed that some students (e.g., B and D) made one or more directional errors that resulted in faulty maps and conclusions. The scan & post technique can be used to assess student understanding of any set of instructions during a lesson and to identify commonly misunderstood or forgotten steps so that they can be given greater emphasis to avoid misunderstandings.

 

Learning from peers (Figure 12). The scan & post technique provides opportunities for students to learn from each other. Teachers know that one of the best ways to learn something is to teach it to others. Unfortunately, it is normally impractical for teachers to employ peer instruction in the classroom, yet the scan & post technique provides an opportunity to do just that. When the instructor gives viewing privileges to all, students can see and examine the contributions of their peers. Students can thereby teach each other through their written diagrams and explanations. Students report that they feel much more accountable to their peers when sharing their understandings using continuous formative assessment techniques such as scan & post (Herr & Tippens, 2013).

 

The scan & post method helps promote a paradigm shift in STEM education towards collaboration and accountability. Students can no longer hide behind the raised hands of their peers. Instead, they must produce diagrams and solutions to illustrate their understanding. These methods help all students respond and give students who need a bit longer to process (e.g., students with language issues or learning disabilities) a chance to provide input. These diagrams provide a permanent digital record that can be used by the instructor to gauge growth in student understanding over time. Such diagrams provide benchmarks during instruction so that teachers can determine student skills and understanding before summative assessments are given.

 

The scan & post technique encourages metacognition as students see their predictions, drawings, and solutions contrasted with those of their peers. By examining the models of others, they are given the opportunity to reflect on their own thinking. These activities help students gain an understanding that the learning enterprise requires collaboration, independent verification, and peer review (Herr & Tippens, 2013). In short, the scan & post technique provides teachers with real-time snapshots of student understanding that can be used to adjust instruction to meet real, rather than perceived, student needs (Herr & Rivas, 2014a).

 

Assessing visual learning using cloud-based photo/movie albums

 

A learning style refers to a student’s natural or habitual pattern of acquiring and processing information. One of the most widely used classifications of learning styles is the VARK model (visual, aural, read/write, kinesthetic) proposed by Fleming & Baume (2006), which asserts that students have preferential learning styles. It is proposed that students will learn best when there is a “mesh” between the way material is presented and their individual learning style. For example, a “visual learner” supposedly learns best when material is presented through pictures, movies, diagrams, and other visual aids, while a “kinesthetic learner” learns best through hands-on experiences. Although there is some debate regarding the value of targeting instruction to the “right” learning style, studies suggest that, in general, all students benefit from mixed-modality presentations, meaning ones that combine visual, auditory, reading, and kinesthetic components (Fleming & Baume, 2006).

 

Using the scan & post technique, instructors can encourage visual learning by setting up a collaborative album into which students can deposit scans of models and diagrams they have made with pencil and paper. Sometimes, however, photographs or movies are more telling than diagrams and models. For example, a math instructor may want students to demonstrate the relevance of conic sections in engineering, and may instruct them to upload photographs of parabolas or paraboloids found in the real world. The students subsequently upload images from their mobile devices or computers to a collaborative cloud-based photo album such as the one shown in Figure 13. The instructor can rapidly scan student contributions and provide timely feedback so students can correct their thinking while the concepts are fresh in their minds (Herr, Rivas, Chang, Tippens, Vandergon, d'Alessio, & Nguyen-Graff, 2015).

 

Suppose an instructor wants students to master photographic techniques such as time-lapse, stop-motion animation, or slow motion so that they can capture and analyze events of scientific interest. Given such goals, the instructor may elect to use collaborative cloud-based photo/movie albums because they provide the opportunity to collect, share, archive, and assess such work online. For example, if an instructor wants students to observe bacterial growth, he or she can instruct them to expose an agar-filled Petri dish to the atmosphere for a couple of hours to collect bacterial spores, and then seal it and photograph it each day for a given period of time. As students upload their photographic chronologies to unique folders within a collaborative cloud-based photo directory, the instructor assesses the quality of their work by reviewing a thumbnail grid of the images such as the one shown in Figure 14. Assessment is instantaneous and continuous, and there is no need to print, photocopy, or collect papers. The instructor can quickly review student work for accuracy and creativity before selecting students to share their work with the class. If it is indeed true that the “best way to learn is to teach,” then such an instructor provides rich learning opportunities in an environment that students consider “safe,” knowing that their work has already been vetted by their teacher. The attentive instructor will ensure that all students have regular opportunities to share their work. The collaborative cloud-based photo album is ideal for collecting student movies of various STEM-related events. The following are a few ideas for student projects that can be done using smart-phone cameras (Tippens, 2015; Herr, Rivas, Chang, Tippens, Vandergon, d'Alessio, & Nguyen-Graff, 2015).
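
The thumbnail grid that makes such at-a-glance review possible is easy to approximate in code. The sketch below is a minimal illustration, assuming student images have synchronized into a hypothetical local uploads folder; it uses the Pillow imaging library to tile the images into a single contact sheet for rapid review.

    from pathlib import Path
    from PIL import Image  # Pillow imaging library

    TILE = 160     # thumbnail edge length in pixels
    COLUMNS = 6    # thumbnails per row in the contact sheet

    def contact_sheet(folder: str, out: str = "sheet.jpg") -> None:
        """Tile every JPEG in `folder` into one grid image for quick review."""
        paths = sorted(Path(folder).glob("*.jpg"))
        if not paths:
            return  # nothing uploaded yet
        rows = -(-len(paths) // COLUMNS)  # ceiling division
        sheet = Image.new("RGB", (COLUMNS * TILE, rows * TILE), "white")
        for i, p in enumerate(paths):
            thumb = Image.open(p)
            thumb.thumbnail((TILE, TILE))  # shrink in place, keeping aspect ratio
            sheet.paste(thumb, ((i % COLUMNS) * TILE, (i // COLUMNS) * TILE))
        sheet.save(out)

    # Hypothetical usage: review all uploads for one prompt at a glance.
    contact_sheet("uploads/petri-dish-day3")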

 

Time-lapse movies. Time-lapse photography is a technique whereby the frequency at which frames are captured is much lower than the rate at which they will be played back. If, for example, a process is recorded at two frames per second (2 fps), it will play back fifteen times faster than the real motion when played at a normal playback rate of 30 frames per second (30 fps / 2 fps = 15). Time-lapse photography is the opposite of slow-motion photography. Time-lapse photography helps us see processes that are normally too slow or subtle to notice, such as the apparent movement of the sun and stars, or of clouds in the sky. The following are a few STEM-related processes that can be easily studied and analyzed with the use of time-lapse movies: the rise of food dye in a white carnation, evaporation of water from a glass under a heat lamp, bread rising, cloud formation, sunset, sunrise, a candle burning, freeway traffic, ice formation, ants devouring a cookie, paper chromatography, germination, flower formation, an ant farm, heliotropism in sunflowers, crystal formation, rusting iron, fruit decomposition, etc. After all time-lapse movies have been uploaded, the teacher can ask students to explain the processes illustrated.
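
The playback arithmetic above generalizes to a single ratio: perceived speed equals playback rate divided by capture rate, with values above 1 indicating time-lapse compression and values below 1 indicating the slow-motion case discussed next. A minimal Python sketch of this calculation (the function name and default playback rate are illustrative) can help students check their camera settings before filming:

    def perceived_speed(capture_fps: float, playback_fps: float = 30.0) -> float:
        """Factor by which on-screen motion differs from real time."""
        return playback_fps / capture_fps

    print(perceived_speed(2))    # time-lapse: 15.0, i.e., 15x faster than real time
    print(perceived_speed(240))  # slow motion: 0.125, i.e., 1/8th of real speed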

 

Slow-motion movies. Slow motion is a movie-making effect in which motion appears to have slowed down when played back. Slow-motion effects can be achieved when frames are captured at a rate much faster than the rate at which they will be played back, so that when replayed at normal speed, time appears to be moving more slowly. Movies taken with cell-phone cameras are often played back at 30 fps. If, for example, a movie is recorded at 240 fps, it will play back at only 1/8th of the real speed (30 fps / 240 fps = 1/8). Slow-motion photography is excellent for analyzing events that occur too rapidly for the human eye to catch. The following are a few examples of STEM-related events that lend themselves to slow-motion video: collisions, launching a rocket, hitting a baseball, spiking a volleyball, a golf swing, diving into a pool, a balloon popping, waves crashing, a Slinky® falling, etc. Students upload their slow-motion movies into a collaborative cloud-based folder where the teacher can assess their technique. After all movies have been uploaded, the teacher can ask students to explain the processes they have captured (Tippens, 2015; Herr, Rivas, Chang, Tippens, Vandergon, d'Alessio, & Nguyen-Graff, 2015).

 

 

Stop-Motion animation movies.  Stop motion (stop frame) animation is a technique that makes objects appear to move on their own.  The objects are moved in small increments between individual photographs.  The photographs are then put together into a movie to give the appearance of motion. Stop-motion animation is particularly useful in demonstrating processes in math and science.  The student must first think deeply about the process that is to be animated, and then break it down into small steps for the purpose of creating the movie. The following are a few ideas: protein synthesis, mitosis, meiosis, gel electrophoresis, food web, water cycle, mineral cycles, rock cycle, subduction & volcano formation, continental drift, orbits of the planets, propagation of waves, chemical bond formation, etc.  Students upload their animations into a collaborative cloud-based folder where the teacher can assess their understanding of the processes they are animating.  (Tippens, 2015; Herr, Rivas, Chang, Tippens, Vandergon, d'Alessio, & Nguyen-Graff, 2015).
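
Once the frames are photographed, assembling them into a movie is a mechanical step that students can script. The sketch below is one possible approach, assuming sequentially numbered JPEG frames (frame_001.jpg, frame_002.jpg, ...) and the freely available ffmpeg command-line tool installed on the system path; it stitches the frames together at a typical stop-motion rate of 12 frames per second.

    import subprocess

    # Assumes frames named frame_001.jpg, frame_002.jpg, ... in the
    # current folder and ffmpeg available on the system path.
    subprocess.run([
        "ffmpeg",
        "-framerate", "12",       # stop-motion frames shown per second
        "-i", "frame_%03d.jpg",   # input frame filename pattern
        "-c:v", "libx264",        # widely supported video codec
        "-pix_fmt", "yuv420p",    # pixel format most players accept
        "animation.mp4",
    ], check=True)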

 

 

Research Findings, Solutions and Recommendations

 

To determine the relative effectiveness of CFA vs. TFA, we should evaluate both in terms of the definition of formative assessment expressed at the outset of this chapter. In other words, how does CFA compare with TFA with respect to the extent that “evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited” (Black & Wiliam, 2009, p. 9)? Further, how effective is cloud-based CFA compared with TFA at measuring student engagement, level of understanding, and misconceptions so that teachers, learners, and peers can make decisions about the next steps to improve learning? Given that cloud-based CFA is a very recent phenomenon, and therefore not widely adopted, there is a paucity of research on its effectiveness relative to TFA.

 

In an effort to provide some tentative answers to the question of the relative efficacy of CFA vs. TFA from a student’s perspective, a within-subjects (N=139) ex post facto study was performed, relying on the perceptions of university students who had received a semester’s worth of STEM-related instruction using CFA after receiving one or more years of STEM-related instruction using TFA. In an effort to gather information on the relative efficacy of CFA vs. TFA from a teacher’s perspective, a within-subjects (N=86) ex post facto study was performed, relying on the perceptions of secondary school science teachers (biology, chemistry, physics, geoscience) who delivered a semester or more of instruction using CFA techniques after having taught one or more semesters of the same class with TFA techniques. Surveys were administered in May 2013 to subjects in the Los Angeles metropolitan area.

 

A brief summary of the findings from teachers who have taught with both CFA and TFA is included here; much more information about this and a variety of related studies can be found in the doctoral dissertation of Marten Tippens (2015). For each question, teachers were asked to compare the perceived effectiveness of CFA vs. TFA. Data from interviews with STEM professors and teachers, classroom observations, and focus-group interviews with STEM teachers corroborated the findings of the survey.

 

A summary of the findings of this survey can be found in Figures 15 (Awareness of Student Cognition and Behavior), 16 (Adjustments to Instruction), and 17 (Student Motivation and Engagement). For each of the 3 categories and 11 sub-categories, teachers perceived CFA to be a more effective means of formative assessment than TFA by significant margins. Teachers who have used both CFA and TFA report that they are substantially more aware of student understanding and behavior when using CFA (Figure 15). Eighty-four percent report that CFA is a more effective or much more effective technique for measuring student engagement, while 91% report that CFA is more effective or much more effective than TFA in measuring the level of student understanding. Finally, 93% report that CFA is more effective or much more effective than TFA in providing information regarding student misconceptions (Tippens, 2015; Herr, Rivas, Chang, Tippens, Vandergon, d'Alessio, & Nguyen-Graff, 2015).

 

Teachers report that when they use CFA, they are not only more aware of the level of student engagement, level of understanding, and misconceptions than when using TFA, but also much more likely to adjust instruction in response to what they have learned (Figure 16). Eighty-nine percent of teachers who have used CFA and TFA in the secondary school classroom report that they are more likely or much more likely to provide additional examples to clarify concepts when using CFA as opposed to TFA, while 83% report that they are more likely or much more likely to adjust the pace of instruction, and 81% report that they are more likely or much more likely to adjust the sequence of instruction. In addition, 86% report that they are more likely or much more likely to provide alternate explanations, and 81% report that they are more likely or much more likely to repeat concepts (Tippens, 2015).

 

Teachers who have used both CFA and TFA report that they think CFA is much more effective in stimulating student motivation than TFA (Figure 17). Eighty percent perceive that students are more likely to recognize the value of tasks when teachers use CFA as opposed to TFA, while 77% report that they think CFA is more effective or much more effective than TFA in fostering self-efficacy among students. Finally, 78% of these teachers report that they think CFA is more effective or much more effective than TFA in stimulating students to complete tasks (Tippens, 2015; Herr, Rivas, Chang, Tippens, Vandergon, d'Alessio, & Nguyen-Graff, 2015).

 

Although these findings are encouraging, it must be emphasized that they are preliminary results of quasi-experimental studies of teacher perceptions based upon limited sample sizes in a specific geographic region. We encourage the development of future experimental studies that examine objective outcomes in geographically diverse settings.  The findings presented in this chapter have been based upon professor and teacher experiences with class sizes ranging from 5 to 40 students.  Much more work needs to be done to investigate the usefulness of these techniques in classes exceeding 40 students in size. The following section details directions for future research.

 

Future Research Directions

 

This chapter has described the use of the CFA model in comparison with TFA. It has provided descriptions of techniques for assessing student understanding using cloud-based spreadsheets (quick-writes), conceptual development using cloud-based presentations, inquiry learning using cloud-based graphing of data, reading and writing skills using cloud-based documents, problem-solving skills using the scan & post technique, and visual learning using cloud-based photo/movie albums. As new technologies become available, more CFA pedagogies will need to be developed. Although the research presented in this chapter shows CFA to be more effective than TFA in assessing student understanding and providing means for addressing learning needs in a timely way, more research needs to be done to test the generalizability of these findings. As additional STEM professors and teachers employ CFA techniques, there will be greater opportunities to assess the relative merits of CFA and TFA in socially, geographically, and culturally diverse settings. Although ongoing research not described in this chapter addresses the following questions, future research should aim to answer them more completely.

(1) Instructor Formative Assessment - To what degree do instructors adjust their instruction to meet student needs when employing CFA compared to TFA?

(2) Student Formative Assessment - How does CFA compare to TFA in stimulating students to apply formative self-assessment strategies such as self-monitoring and self-correcting?

(3) Accountability/Engagement - To what degree are students engaged in the instructional process when CFA is employed compared to TFA?

(4) Student Learning - What is the relationship between the type of formative assessment used and student learning outcomes?

 

Conclusion

 

Formative assessment, in its most effective form, gives instructors a window into their students’ minds. As the student-centered learning discussion in STEM fields proceeds, it is hoped that STEM teachers will move away from a strictly summative testing mindset, which only shows what their students did not know, toward a formative assessment mindset that shows what their students are learning. The methods of CFA hold great promise toward that goal.

 

Cloud-based continuous formative assessment (CFA) allows STEM instructors to collect and archive student questions, answers, and thoughts using smart phones, tablets, or computers, and to assess student understanding in real time. Informed adjustments to instruction may then be made to redirect learning and ensure better understanding. Peer-to-peer learning is achieved as students are exposed to, and evaluate, the collective body of data they generate. Through this practice, students experience and develop an appreciation for the importance of the collaborative professional environment in which career research scientists operate (Tippens, 2015; Herr & Rivas, 2014).

 

CFA techniques have been developed to promote students’ engagement, motivation, accountability, and metacognition, and to help students realize the benefits of collaborating with their peers. With CFA techniques, students begin to think about their own thinking. Teachers can see whether student comprehension of course material is on track. Students can also see whether they fully comprehend the course material and whether their thinking corresponds with that of the teacher and other students. Instructors apply CFA methods as often and as spontaneously as they would the standard classroom art of questioning. However, instead of asking for a show of hands, where many students withhold participation, all students are required to submit answers. This increases their level of accountability for their own learning (Herr, Rivas, Chang, Tippens, Vandergon, d'Alessio, & Nguyen-Graff, 2015).

 

Cloud-based Continuous Formative Assessment (CFA) is still in its infancy, but preliminary data suggest that it is potentially superior to traditional methods of formative assessment (TFA) in a variety of ways. Surveyed teachers experienced in CFA techniques perceived CFA to be significantly more effective in measuring student understanding, engagement, and misconceptions, and to provide superior tools for adjusting instruction to meet measured needs. Future research is needed to test the generalizability of these findings and to determine the relationship between the mode of formative assessment and student learning gains.

 

 

 

References

 

Akpanudo, U., Sutherlin, A., & Sutherlin, G. (2013). The effect of clickers in university science courses. Journal of Science Education & Technology, 22(5), 651-666.

Andrade, H., & Cizek, G. (2010). Handbook of formative assessment. New York: Routledge.

Beatty, I., Feldman, A., & Lee, H. (2012). Factors that affect science and mathematics teachers' initial implementation of technology-enhanced formative assessment using a classroom response system. Journal of Science Education & Technology, 21(5), 523-539.

Beatty, I., & Gerace, W. (2009). Technology-enhanced formative assessment: A research-based pedagogy for teaching science with classroom response technology. Journal of Science Education and Technology, 18(2), 146-162.

Bennett, K., & Cunningham, A. (2009). Teaching formative assessment strategies to preservice teachers: Exploring the use of handheld computing to facilitate the action research process. Journal of Computing in Teacher Education, 25, 99-105.

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5, 7-74.

Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21, 5-31.

Clark, I. (2011). Formative assessment: Policy, perspectives and practice. Florida Journal of Educational Administration & Policy, 4(2), 158-180.

Common Core State Standards Initiative (2010a). Common core state standards for mathematics. Washington, DC: National Governors Association Center for Best Practices and the Council of Chief State School Officers.

Common Core State Standards Initiative (2010b). Common core standards for English language arts & literacy in history/social studies, science, and technical subjects. Washington, DC: National Governors Association Center for Best Practices and the Council of Chief State School Officers.

Davis, S. (2004, October 22). Technology does not improve teaching. California State University Northridge Daily Sundial. 1.

d'Alessio, M., & Lundquist, L. (2013).  Computer Supported Collaborative Rocketry: Teaching students to distinguish good and bad data like an expert physicist. The Physics Teacher, 51 (7), 424-427.

d'Alessio, M.A. (2014). What kinds of questions do students ask? Results from an online question-ranking tool. Electronic Journal of Science Education, 18 (5).

Dunn, J. (2011). The Evolution of Classroom Technology. Edudemic: Connecting Education & Technology.  Retrieved from: http://www.edudemic.com/classroom-technology.

Fleming, N., & Baume, D. (2006). Learning styles again: VARKing up the right tree! Educational Developments, 7(4), 4.

Foley, B., & Reveles, J. (2014). Pedagogy for the connected science classroom: Computer supported collaborative science and the next generation science standards. Contemporary Issues in Technology and Teacher Education, 14(4). 

Flocker, M. (2006). Death by PowerPoint: A modern office survival guide. Cambridge, MA: Da Capo Press.

Fluckiger, J., Vigil, Y., Pasco, R. & Danielson, K. (2010). Formative feedback: Involving students as partners in assessment to enhance learning. College Teaching, 58(4), 136-140. 

Gok, T. (2011). An evaluation of student response systems from the viewpoint of instructors and students. Turkish Online Journal of Educational Technology, 10(4), 67–83. 

Great Schools Partnership (2015). The glossary of education reform. Retrieved from:  http://edglossary.org/formative-assessment.

Guilfoyle, C. (2006). NCLB: Is there life beyond testing? Educational Leadership, 64(3), 8. 

Han, J., & Finkelstein, A. (2013). Understanding the effects of professors' pedagogical development with clicker assessment and feedback technologies and the impact on students' engagement and learning in higher education. Computers & Education, 65, 64-76.

Herr, N. & Rivas, M. (2010a). The use of collaborative web-based documents and websites to build scientific research communities in science classrooms. Proceedings of the 8th Annual Hawaii International Conference on Education, (pp. 851-858), Honolulu, HI: HICE.

Herr, N. & Rivas, M. (2010b). Teaching the nature of scientific research by collecting and analyzing whole-class data using collaborative web-based documents. In J. Sanchez & K. Zhang (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2010 (pp. 1029-1034). Chesapeake, VA: AACE.

Herr, N., Rivas, M., Foley, B., Vandergon, V., Simila, G., d'Alessio, M., & Potsma, H. (2011a). Computer Supported Collaborative Education - Strategies for using collaborative web-based technologies to engage all learners. Proceedings of the 9th Annual Hawaii International Conference on Education (pp. 2508-2509). Honolulu, HI: HICE.

Herr, N., Rivas, M., Foley, B., Vandergon, V., & Simila, G. (2011b). Using collaborative web-based documents to instantly collect and analyze whole class data. Proceedings of the 9th Annual Hawaii International Conference on Education (pp. 2497-2507). Honolulu, HI: HICE.

Herr, N., Rivas, M., Foley, B., Vandergon, V., d'Alessio, M., Simila, G., Nguyen-Graff, D. & Postma, H. (2012a). Employing collaborative online documents for continuous formative assessments. In P. Resta (Ed.), Proceedings of Society for Information Technology & Teacher Education International Conference 2012 (pp. 3899-3903). Chesapeake, VA: AACE.

Herr, N., Rivas, M., Foley, B., d'Alessio, M. & Vandergon, V. (2012b). Using cloud-based collaborative documents to perform continuous formative assessment during instruction. In T. Bastiaens & G. Marks (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2012 (pp. 612-615). Chesapeake, VA: AACE.

Herr, N., & Tippens, M. (2013). Using scanning apps on smart phones to perform continuous formative assessments of student problem-solving skills during instruction in mathematics and science classes. In T. Bastiaens & G. Marks (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2013 (pp. 1138-1143). Chesapeake, VA: AACE.

Herr, N., & Rivas, M. (2014a). Using cloud-based collaborative resources to conduct continuous formative assessment. Proceedings of the 12th Annual Hawaii International Conference on Education. Honolulu, HI: HICE.

Herr, N., & Rivas, M. (2014b). Engaging students in the science and engineering practices of the Next Generation Science Standards (NGSS) with Computer Supported Collaborative Science (CSCS). Proceedings of the 12th Annual Hawaii International Conference on Education. Honolulu, HI: HICE.

Herr, N., Rivas, M., Chang, T., Tippens, M., Vandergon, V., d'Alessio, M., & Nguyen-Graff, D. (2015). Continuous formative assessment (CFA) during blended and online instruction using cloud-based collaborative documents. In Koç, S., Wachira, P., & Liu, X. (Eds.), Assessment in Online and Blended Learning Environments. Charlotte, NC: Information Age Publishing.

Hestenes, D. (1987). Toward a modeling theory of physics instruction. American Journal of Physics, 55(5), 440-454.

Hewitt, N., & Seymour, E. (1997). Talking about leaving: Why undergraduates leave the sciences. Boulder, CO, Oxford: Westview Press. 

Higher Education Research Institute (2012). HERI reports: The freshman survey – The American freshman: National norms for Fall 2011. Retrieved from http://heri.ucla.edu/publications-brp.php

Huba, M., & Freed, J., (2000). Learner centered assessment on college campuses: Shifting the focus from teaching to learning. Community College Journal of Research and Practice, 24(9), 759-766.

Jesnek, L. (2011). Peer editing in the 21st century college classroom: Do beginning composition students truly reap the benefits?  Journal of College Teaching & Learning, 8(5), 17-24.

Jago, C. (2009). A history of NAEP assessment frameworks. Washington, DC: National Assessment Governing Board.

Jahan, F., Shaikh, N., Norrish, M., Siddqi, N., & Qasim, R. (2013). Comparison of students’ self-assessment to examiners’ assessment in a formative observed structured clinical examination: A pilot study. Journal of Postgraduate Medical Institute, 27(1), 94-99.

Kaleta, R., & Joosten, T. (2007). Student response systems: A University of Wisconsin System study of clickers. Educause Center for Applied Research Bulletin, 2007(10), 4-6.

Kerr, C. (2001). Death by PowerPoint: How to avoid killing your presentation and sucking the life out of your audience. Santa Ana, CA: ExecuProv Press.

Linn, R., Baker, E., & Betebenner, D. (2002). Accountability systems: Implications of requirements of the No Child Left Behind Act of 2001. Educational Researcher, 31(6), 3-16.

National Academy of Sciences. (2006). Rising above the gathering storm. Washington, DC: National Academies Press. 

National Commission on Mathematics and Science Teaching for the 21st Century. (2000). Before it's too late. Washington, DC: Government Printing Office. 

NGSS Lead States. (2013). Next Generation Science Standards: For states, by states. Washington, DC: Achieve, Inc.

Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.

Parks, B. (2012, August 30). Death to PowerPoint! Bloomberg Businessweek. Retrieved from http://www.bloomberg.com/bw/articles/2012-08-30/death-to-powerpoint.

Pew Research Center (2014). Mobile technology fact sheet. Pew Research Internet Project. Retrieved from http://www.pewinternet.org/fact-sheets/mobile-technology-fact-sheet.

Puentedura, R. (2009). Transformation, technology, and education. Retrieved from http://hippasus.com/resources/tte/.

Polanyi, M. (1967). The tacit dimension. NY: Anchor Books.

Popham, W. (2008). Transformative assessment. VA: Association for Supervision and Curriculum Development.

Porter, A., McMaken, J., Hwang, J., & Yang, R. (2011). Common core standards: The new US intended curriculum. Educational Researcher, 40(3), 103-116.

Quigley, C., Marshall, J., Deaton, C., Cook, M., & Padilla, M. (2011). Challenges to inquiry teaching and suggestions for how to meet them. Science Educator, 20(1), 55-61.

Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28, 4–13.

Savoy, A., Proctor, R., & Salvendy, G. (2009). Information retention from PowerPoint and traditional lectures. Computers & Education, 52(4), 858-867.

Schön, D. (1987). Educating the reflective practitioner. San Francisco: Jossey-Bass.

Shepard, L. A. (2005). Formative assessment: Caveat emptor. The future of assessment: Shaping teaching and learning. ETS Invitational Conference. New York: Educational Testing Service.

Stull, J. C., Majerich, D. M., Bernacki, M. L., Varnum, S. J., & Ducette, J. P. (2011). The effects of formative assessment pre-lecture online chapter quizzes and student-initiated inquiries to the instructor on academic achievement. Educational Research and Evaluation, 17, 253-262.

Tippens, M. C. (2015). The effect of computer supported continuous formative assessment on student learning in STEM classrooms. (Unpublished doctoral dissertation). California State University, Northridge; Northridge, CA.

Wells, M., Hestenes, D., & Swackhamer, G. (1995). A modeling method for high school physics instruction. American Journal of Physics, 64, 114-119.

Youssef, L. S. (2012). Using student reflections in the formative evaluation of instruction: A course-integrated approach. Reflective Practice, 13(2), 237-254.