At the conclusion of the workshop, learners will be able to record the similarities and differences between qualitative and quantitative research.
Upon completing the workshop, learners will be able to write an example of a research question and determine whether a given research method suits quantitative research, qualitative research, or both.
Upon completing the workshop, learners will be able to describe two types of reliability and two types of validity, and explain why it is important for a research study to have both reliability and validity measures.
This website gives a brief overview of the importance of understanding educational research and the role it plays in the design of instruction. Instructional designers are professionals who use the science of learning, the guidelines of a discipline, the learner's needs, and educational research to develop, deliver, and evaluate instruction (Salkind, 2017). Instructional designers consult best practices in education, use appropriate instructional design models, curate learning materials, improve instruction, and then analyze the project's effectiveness (Brown & Green, 2019). For further reading, see The Essentials of Instructional Design: Connecting Fundamental Principles with Process and Practice by Abbie H. Brown and Timothy D. Green, and Exploring Research by Neil J. Salkind.
When planning research, it is paramount to consider the reliability and validity of your sample, methodology, and measurement tool. Reliability shows that the methods and measurements of an experiment are reproducible: if the research instruments and procedures can be replicated with fidelity time and time again, the process and methodology are said to be reliable and credible. To the extent that the same instrument and procedure actually fit the question or problem being investigated, the research is said to be trustworthy; in other words, it has validity.
Validity looks at the accuracy of a measurement: does the result of an experiment measure what it claims to measure? The outcomes of a test that match the subject matter being tested can be considered trustworthy. Keep in mind that validity is measured in degrees, from low to high. Validity also considers the context of the results: do the results of the test fit the purpose of the research?
Reliability
Validity
Reliability has three components: test-retest, interrater, and internal consistency (Salkind, 2017). Test-retest reliability shows that the results of an experiment can be repeated with the same sample at two different times. This form of reliability testing is time-consuming and may prove challenging because of external factors. For example, a teacher gives a spelling test to a group of thirty kindergartners in September and tests the same group again in the spring; in the meantime, however, external factors may introduce measurement error, such as four students receiving intervention services after school four days a week. With interrater reliability, raters or observers agree on norms for the variables being tested or researched. For example, two educators are responsible for observing and rating a colleague during a lesson. Each rater fills out a Google Form during the observation, but some items on the form are ambiguous and leave much open to interpretation (how is engagement measured?). Lastly, internal consistency reliability shows the uniformity of the different sections of the measurement tool.
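The examples above do not include computations, but two of these reliability types have standard statistics: test-retest reliability is usually reported as a Pearson correlation between the two administrations, and internal consistency as Cronbach's alpha across test items. A minimal sketch, using made-up scores rather than data from the examples above:

```python
def pearson_r(x, y):
    """Pearson correlation: the usual test-retest reliability coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cronbach_alpha(items):
    """Cronbach's alpha: internal consistency across test items.
    `items` is a list of columns, one list of per-student scores per item."""
    k = len(items)
    n = len(items[0])
    def variance(col):
        m = sum(col) / n
        return sum((v - m) ** 2 for v in col) / (n - 1)
    totals = [sum(row) for row in zip(*items)]
    return (k / (k - 1)) * (1 - sum(variance(c) for c in items) / variance(totals))

# Hypothetical spelling-test scores for five students, fall vs. spring.
fall = [12, 15, 9, 14, 11]
spring = [13, 16, 10, 15, 12]
print(round(pearson_r(fall, spring), 3))  # r near 1.0 suggests test-retest reliability

# Hypothetical per-item scores (three items, five students).
items = [[3, 4, 2, 4, 3], [3, 5, 2, 4, 3], [4, 4, 3, 5, 3]]
print(round(cronbach_alpha(items), 3))  # alpha above ~0.9 suggests strong internal consistency
```

A coefficient close to 1.0 in either case indicates that the measurement is stable across time (test-retest) or that the items hang together as a single measure (internal consistency).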
There are three types of validity: construct, content, and criterion. If the outcomes of a test support an underlying construct, the test has construct validity. For example, if a test is designed to measure the motivation of first-year teachers, certain established variables need to be evaluated: is there a “gold standard” test of what motivation looks like that can be referenced? Content validity is considered the least demanding of the three types. When establishing content validity, use a sample of the content being tested and consult an expert on that content; all aspects of the content should be covered concisely. Finally, criterion validity measures either present performance (concurrent validity) or future performance (predictive validity). The criterion is set up so that performance can be compared across time. For example, students who earned an A in an American History course might be considered good candidates for AP American History; this is a prediction. To check it, compare current AP students' grades from the regular course (the test) with their grades in the AP course, and determine whether the regular-course grades predicted their success as AP students.
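The AP American History check described above can be done with simple arithmetic. A minimal sketch, using hypothetical grade records (none of these values come from an actual study), of that kind of predictive-validity comparison:

```python
# Hypothetical records: (regular-course grade, AP-course grade) for each student.
records = [("A", "A"), ("A", "B"), ("B", "C"), ("A", "A"), ("B", "B"), ("A", "C")]

# Predictive-validity check: among students who earned an A in the regular
# course (the predictor), how many earned an A or B in AP (the criterion)?
a_students = [ap for reg, ap in records if reg == "A"]
hits = sum(1 for ap in a_students if ap in ("A", "B"))
hit_rate = hits / len(a_students)
print(f"{hits}/{len(a_students)} predicted successes ({hit_rate:.0%})")  # 3/4 (75%)
```

A high hit rate suggests the regular-course grade has predictive validity for AP success; a formal study would use a correlation coefficient over the full grade distribution rather than this simple proportion.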
Article Title:
Prelicensure nursing students' perspectives on video-assisted debriefing following high fidelity simulation: A qualitative study.
Zhang et al. (2019) wanted to explore prelicensure nursing students’ perspectives on the debriefing process after high-fidelity simulations; in particular, what effect does video-assisted debriefing (VAD) have on nursing students’ attitudes, self-esteem, and learning? The lack of studies on nursing students’ attitudes toward and perceptions of VAD, a relatively new debriefing tool, is a major reason for the study. The article explores this question using an exploratory qualitative research method: focus groups.
The study consisted of six focus-group interviews with twenty-seven prelicensure nursing students at a university in Singapore. Each student’s role in the simulation was that of “performer” or “observer,” and the interview questions were semi-structured.
Three themes and eight subthemes emerged. The three themes were:
going from verbal debriefing to VAD
the positives and negatives of video-assisted debriefing
setting the groundwork for success using VAD (Zhang et al., 2019, p. 1).
The article clearly states that quantitative studies on VAD fall short of capturing students’ full range of emotions and perspectives from their high-fidelity simulation experience (Zhang et al., 2019).
Zhang and colleagues (2019) hope the findings from this study serve as a framework for which interventions to consider when developing and implementing video-assisted debriefing after medical simulations.
Implications for Practice
I see focus groups as a beneficial qualitative research method for what I do in library media and information literacy. Zhang et al. (2019) used a qualitative methodology to get at the heart of how their subjects, prelicensure nurses, felt about their performance during a high-fidelity simulation. The objectivity of video-assisted debriefing helped the nursing students see what “really” happened during the simulation exercise. This article illustrates the importance of using new technology to improve practice; as an educator, it is beneficial to upgrade my instructional practices with technology that makes the process more efficient.
Article Title:
Effectiveness of clicker‐assisted teaching in improving the critical thinking of adolescent learners.
This study examines the impact that using student response systems (clickers) has on lessons whose goal is critical thinking. According to Pisheh et al. (2019), studies on this subject exist, but they mostly focus on the college population. Critical thinking is considered the “gold standard” in the education world: a student who can think critically can analyze, evaluate, and synthesize the information they encounter. Pisheh and colleagues (2019) state that teaching students how to think critically is a real-life skill worth pursuing; in contrast, a student who struggles with critical thinking may not live up to their full potential in society (Flores et al., 2012).
Implications for Practice
The implications of assisting students who have higher needs for intervention cannot be ignored. The SRS intervention, administered to an experimental group (EG) of 156 eighth-grade girls from Tehran, was found to be more engaging, to provide immediate feedback, and to be relatively easy to facilitate (Pisheh et al., 2019).
In my line of work, which includes education and information literacy from the library media perspective, technology that can improve student learning outcomes is worth the investment. Notably, Pisheh et al. (2019) selected adolescent learners as their research subjects; few studies have used elementary and middle school students to research the benefits of clickers. The data showed that students on the low end of the performance spectrum who received the SRS intervention have an opportunity to catch up to their peers.
As a library media specialist who works closely with subject-matter experts (teachers), I can present technology that helps them differentiate their lesson plans. My approach should be one of collaboration, not judgment of how well they address the needs of all students in their class. My best avenue is to attend their TBT (teacher-based team) meetings, where subject-area teachers share strategies to be implemented for an agreed-upon time, analyze data from online assessments such as EdIncites and MAP, and determine what interventions may be needed. Going forward, I will attend at least one TBT meeting a month. This will allow me to know what is needed in our building and what role I can play in intervening.
List the similarities and the differences between qualitative and quantitative research.
Write an example of a research question. Take a look at the research methods below.
State whether each method is qualitative, quantitative, or both.
focus groups, experiment, interview, observation, survey, case study
Why is it important for a research study to have both reliability and validity measures?
Describe two types of reliability and two types of validity.
Brown, A. H., & Green, T. D. (2019). The essentials of instructional design: Connecting fundamental principles with process and practice (4th ed.). Routledge.
Flores, K. L., Matkin, G. S., Burbach, M. E., Quinn, C. E., & Harding, H. (2012). Deficient critical thinking skills among college graduates: Implications for leadership. Educational Philosophy and Theory, 44(2), 212–230. https://doi.org/10.1111/j.1469-5812.2010.00672.x
Ghanaat Pisheh, E. A., NejatyJahromy, Y., Gargari, R. B., Hashemi, T., & Fathi-Azar, E. (2019). Effectiveness of clicker-assisted teaching in improving the critical thinking of adolescent learners. Journal of Computer Assisted Learning, 35(1), 82–88. https://doi.org/10.1111/jcal.12313
Salkind, N. J. (2017). Exploring research (9th ed.). Pearson.
Zhang, H., Goh, S. H. L., Wu, X. V., Wang, W., & Mörelius, E. (2019). Prelicensure nursing students’ perspectives on video-assisted debriefing following high fidelity simulation: A qualitative study. Nurse Education Today, 79, 1–7. https://doi.org/10.1016/j.nedt.2019.05.001