LESSON 1: Test Results and Educational Decision Making
Data-driven educational decision making refers to the process by which educators examine assessment data to identify student strengths and deficiencies and apply those findings to their practice. This process of critically examining curriculum and instructional practices relative to students' actual performance on standardized tests and other assessments yields data that help teachers make more accurately informed instructional decisions (Mertler, 2007; Mertler & Zachel, 2006). Local assessments—including summative assessments (classroom tests and quizzes, performance-based assessments, portfolios) and formative assessments (homework, teacher observations, student responses and reflections)—are also legitimate and viable sources of student data for this process. The concept of using assessment information to make decisions about instructional practices and intervention strategies is nothing new; educators have been doing it forever. It is an integral part of being an effective educational professional.
In the past, however, those decisions were more often based on what I refer to as the "old tools" of the professional educator: intuition, teaching philosophy, and personal experience. The problem with relying solely on the old tools as the basis for instructional decision making is that they do not add up to a systematic process (Mertler, 2009). For example, as educators, we often like to try out different instructional approaches and see what works. That sounds simple enough, but the trial-and-error process of choosing a strategy, applying it in the classroom, and judging how well it worked is different for every teacher or instructor.
What is the use of the old tools in this lesson? In my understanding, the old tools are an important and integral part of the educational process, both for the learners and for the teachers. However, the old tools no longer seem to be enough on their own (LaFee, 2002); they must be balanced with the "new tools." These new tools, which consist mainly of standardized test and other assessment results, provide an additional source of information upon which teachers can base curricular and instructional decisions.
A "systematic approach" is a process based on the application of clearly predefined and repeatable steps. Taking the data-driven approach to instructional decision making requires us to consider alternative instructional and assessment strategies in a systematic way.
LESSON 2: Fundamental Analytical Techniques
A measure of central tendency is a single value that attempts to describe a set of data by identifying the central position within that set of data. As such, measures of central tendency are sometimes called measures of central location; they are also classed as summary statistics. In other words, a measure of central tendency helps you find the middle, or the average, of a data set. The three most common measures of central tendency are the mean, the median, and the mode. The mean (often called the average) is most likely the measure you are most familiar with. The median is the middle number in an ordered data set, and the mode is the most frequent value.
Mean (Arithmetic)
The mean (or average) is the most popular and well-known measure of central tendency. It can be used with both discrete and continuous data, although it is most often used with continuous data (see our Types of Variable guide for data types). The mean is equal to the sum of all the values in the data set divided by the number of values in the data set. So, if we have n values in a data set with values x1, x2, x3, …, xn, then the sample mean, usually denoted by x̄ (pronounced "x bar"), is x̄ = (x1 + x2 + … + xn) / n. When a data set contains extreme values, the median would be a better measure of central tendency. The median is the middle value when a data set is ordered from least to greatest. The mode is the number that occurs most often in a data set.
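The three measures above can be sketched with Python's built-in statistics module. The scores below are invented sample data for illustration, not figures from the lesson.

```python
# Computing the three common measures of central tendency
# using Python's standard-library statistics module.
import statistics

scores = [70, 75, 75, 80, 85, 90, 95]  # hypothetical test scores

mean = statistics.mean(scores)      # sum of all values / number of values
median = statistics.median(scores)  # middle value of the ordered data set
mode = statistics.mode(scores)      # most frequently occurring value

print(mean, median, mode)  # mean ≈ 81.43, median = 80, mode = 75
```

Note how the three measures can disagree even on the same data set, which is why it matters to choose the measure that fits the shape of the data.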
What is the normal distribution? It is a probability distribution that is symmetric about the mean, showing that data near the mean are more frequent in occurrence than data far from the mean. In graphical form, the normal distribution appears as a "bell curve".
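A small sketch of the bell-curve idea: draw many values from a normal distribution and check that most of them cluster near the mean, with roughly 68% falling within one standard deviation. The mean of 100 and standard deviation of 15 are assumed values for illustration (an IQ-style scale), not parameters from the lesson.

```python
# Sampling from a normal distribution to illustrate that values
# near the mean occur more often than values far from it.
import random

random.seed(0)  # fixed seed so the sketch is reproducible
mu, sigma = 100, 15  # assumed mean and standard deviation
draws = [random.gauss(mu, sigma) for _ in range(100_000)]

# Proportion of draws within one standard deviation of the mean
within_1_sd = sum(mu - sigma <= x <= mu + sigma for x in draws) / len(draws)
print(f"share within 1 SD of the mean: {within_1_sd:.3f}")  # roughly 0.68
```

This 68% figure is a well-known property of the normal curve; about 95% of values fall within two standard deviations.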
LESSON 3: Fundamental Techniques in Interpreting Test Results
Test interpretation is the process of analyzing scores on a test, translating qualitative data into quantitative terms, and expressing grades numerically. Score interpretation means the same thing as test interpretation. In my understanding, test interpretation also serves as numerical feedback on where you stand in the learning process.
Scores are "a summary of the evidence contained in an examinee's responses to the items of a test that are related to the construct or constructs being measured." The types of scores are raw scores and scaled scores.
Methods of Interpreting Test Scores
A referencing framework is a structure you can use to compare a student's performance to something external to the assessment itself.
Criterion-referenced interpretation permits us to describe an individual's test performance without referring to the performance of others. Norm-referenced interpretation, by contrast, tells us how an individual compares with other students who have taken the same test.
The common types of norm-referencing frameworks are grade norms, percentile norms, standard score norms, and stanines. Criterion-referenced interpretation has both advantages and disadvantages. On the advantage side, criterion-referenced testing has the benefit of being an objective comparison to a standard; it can be considered fairer, because a person's score is not dependent on the performance of others. On the disadvantage side, it may not provide a comprehensive view of a student's abilities compared to their peers: while criterion-referenced tests are excellent for measuring mastery of specific skills or content, they do not offer insight into how a student's performance stacks up against a larger group.
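Two of the norm-referenced scores named above can be sketched in a few lines of Python. The raw scores are invented for illustration, and the stanine conversion used here is the standard one (stanine = 2z + 5, rounded and clipped to the 1–9 range), computed from the norm group's own mean and standard deviation.

```python
# Converting raw scores to standard scores (z-scores) and stanines.
# The raw scores below are hypothetical, not from the lesson.
import statistics

raw_scores = [52, 60, 65, 70, 74, 78, 83, 88, 95]
mu = statistics.mean(raw_scores)
sd = statistics.stdev(raw_scores)  # sample standard deviation

for x in raw_scores:
    z = (x - mu) / sd                          # standard score
    stanine = min(9, max(1, round(2 * z + 5)))  # stanine on the 1-9 scale
    print(f"raw={x:3d}  z={z:+.2f}  stanine={stanine}")
```

Each converted score locates the student relative to the group rather than relative to a fixed standard, which is exactly the norm-referenced view described above.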
LESSON 4: Using Technologies in Test Analysis, Interpretation, and Evaluation
The existence of technology has been found to be beneficial to all, especially in data analysis. The manual method of computing statistical measures has fallen out of the mainstream because, in the first place, it takes longer to arrive at the final result. Today, through the aid of technology, statistical software has been introduced. Statistical Software (SS) is a vital tool for research analysis, data validation, and findings. The emergence of statistical software in the twenty-first century has helped researchers in the physical and social sciences to improve the quality of their research. One of the statistical software packages commonly used today is SPSS (Statistical Package for the Social Sciences).
Test Procedure in SPSS Statistics (Pearson Product-Moment Correlation)
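The SPSS point-and-click steps are not reproduced here, but the statistic that procedure computes, the Pearson product-moment correlation coefficient r, can be cross-checked in a few lines of Python from its defining formula: r = Σ(x − x̄)(y − ȳ) / √(Σ(x − x̄)² · Σ(y − ȳ)²). The data below (hours studied vs. test scores) are invented for illustration, not SPSS output.

```python
# A plain-Python sketch of the Pearson product-moment correlation,
# the same coefficient SPSS reports in its Correlate > Bivariate procedure.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

hours_studied = [1, 2, 3, 4, 5]    # hypothetical data
test_scores = [60, 65, 72, 78, 85]
print(f"r = {pearson_r(hours_studied, test_scores):.3f}")
```

A value of r near +1 indicates a strong positive linear relationship between the two variables, near −1 a strong negative one, and near 0 little linear relationship at all.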