This section contains our course journey. These artifacts showcase my understanding of assessment principles, the development of evaluation tools, and their application in measuring student learning. Each entry reflects my growth in designing fair, valid, and meaningful assessments that support both academic achievement and values formation.
List of Reporters and their assigned Topics
Team 1: Ma'am Sittie Hayya Papandayan & Sir Bernabe Jr. Eroy
Cognitive functioning refers to how the brain processes, understands, and applies information in learning. Problem-solving skills help students analyze situations, think critically, and find effective solutions. The dimensions of knowledge explain different levels of understanding, from basic facts to deeper conceptual and procedural knowledge. The revised Bloom’s Taxonomy organizes learning into levels, helping teachers design activities that develop higher-order thinking skills. A Table of Specifications ensures assessments cover the right content and cognitive levels, making tests more balanced and effective.
Team 2: Ma'am Vea Jane Aranaydo & Ma'am Sheila Mae Pielago
Traditional assessments, like paper-and-pencil tests, remain widely used because they efficiently measure student knowledge and skills. Objective tests, such as multiple-choice and true-or-false questions, provide quick and reliable results but may not always assess deep understanding. Essay tests allow students to explain their thoughts and demonstrate critical thinking, but they can be time-consuming to grade and may be affected by scorer bias. Following proper guidelines in constructing test items helps avoid common errors and ensures fairness in assessment. Using clear criteria and well-designed rubrics for scoring essay tests makes evaluation more consistent and objective.
Team 3: Ma'am Florabel Luza, Sir James Ryan Carpio & Sir Neil Arvin Bermudez
A well-constructed test must be valid and reliable to accurately measure student learning. Validity ensures that a test measures what it is supposed to, while reliability ensures consistency in results. Several factors, such as unclear instructions, poor test items, and student conditions, can affect both validity and reliability. Different approaches, like test-retest and internal consistency methods, help estimate a test’s reliability. Item analysis helps improve both objective and essay tests by identifying which questions are effective and which need revision.
Team 4: Ma'am Eden Ticon, Ma'am Chinna Chatto & Ma'am Earla Pimentel
Authentic assessment measures real-world skills by focusing on how students apply knowledge in meaningful tasks. Difficulty and discriminability indices help evaluate the quality of test items, ensuring they are appropriately challenging and effectively differentiate student performance. Options analysis improves multiple-choice questions by checking if distractors are effective. Performance-based assessments, whether product- or process-oriented, emphasize skill demonstration and practical application. Sharing insights, classroom experiences, and comprehension checkups allow students to reflect on learning and ensure understanding of key concepts.
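The difficulty and discriminability indices described above can be computed from a simple response pattern. This sketch uses hypothetical data for one test item: the difficulty index is the proportion of all examinees answering correctly, and the discrimination index is the difference in proportion correct between the upper and lower scoring groups.

```python
# Illustrative item analysis for one test item (hypothetical responses).
# 1 = answered correctly, 0 = answered incorrectly.
upper_group = [1, 1, 1, 0, 1]   # top scorers on the whole test
lower_group = [1, 0, 0, 0, 1]   # bottom scorers on the whole test

# Difficulty index: proportion of all examinees who got the item right.
difficulty = (sum(upper_group) + sum(lower_group)) / (len(upper_group) + len(lower_group))

# Discrimination index: difference between the proportions correct in the
# upper and lower groups; higher values mean the item better separates
# strong from weak performers.
discrimination = sum(upper_group) / len(upper_group) - sum(lower_group) / len(lower_group)

print(difficulty)                 # 0.6 — moderately difficult item
print(round(discrimination, 2))   # 0.4 — item discriminates positively
```

An item with a near-zero or negative discrimination index would be flagged for revision, since it fails to distinguish stronger from weaker performers.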
Team 5: Ma'am Mia Jean Samoranos & Sir Marlon Emol
A portfolio is a collection of student work that showcases progress, learning, and achievements over time. Creating clear and well-structured rubrics, whether holistic or analytic, ensures fair and consistent assessment of student performance. Different types of portfolios, such as developmental, showcase, and assessment portfolios, serve various educational purposes. e-Portfolios provide a digital platform for organizing and presenting student work using tools like Google Sites, Canva, or Seesaw. By using portfolios effectively, educators can promote reflective learning and a deeper understanding of student growth.
Team 6: Sir Jesler Duro, Ma'am Gleafiera Estrada & Ma'am Partosa
Descriptive statistics summarize data through measures like mean, median, and mode, while inferential statistics help make predictions and generalizations. Understanding group relationships allows educators to analyze patterns and trends in student performance. Normal distribution, often represented as a bell curve, shows how scores are spread in a population. Measures of central tendency (mean, median, mode) describe the average performance of a group, while measures of variation (range, variance, standard deviation) show how scores differ from one another. Using these statistical tools helps in making fair, data-driven decisions in assessment and evaluation.
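As a quick numeric illustration of the measures above, here is a minimal Python sketch using made-up quiz scores; the standard library's `statistics` module provides the measures of central tendency and variation directly.

```python
from statistics import mean, median, mode, pstdev, pvariance

# Hypothetical quiz scores for one class section (invented data).
scores = [70, 75, 75, 80, 85, 95]

# Measures of central tendency
print(mean(scores))     # 80   — average score
print(median(scores))   # 77.5 — middle value when scores are sorted
print(mode(scores))     # 75   — most frequent score

# Measures of variation
print(round(pvariance(scores), 2))  # ≈ 66.67 — average squared deviation
print(round(pstdev(scores), 2))     # spread in the same units as the scores
```

Seeing both kinds of measures side by side makes it clear why a mean alone can be misleading: two classes can share the same average yet differ widely in spread.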
We learned how to create clear, measurable, and well-structured learning objectives that align with educational goals.
Each of us was assigned a topic to report on, helping us develop our communication and presentation skills.
We examined test questions to determine their difficulty, effectiveness, and fairness in assessing student learning.
We analyzed and wrote articles on educational topics, enhancing our critical thinking and writing skills.
We took quizzes in different formats, including oral, face-to-face, and online via Canvas, to reinforce our learning.
We were required to solve statistical problems on the blackboard, applying concepts like mean, median, standard deviation, kurtosis, and skewness.
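The blackboard exercises above can be checked with a short script. This is a sketch with invented scores: skewness here is the average cubed z-score (positive values indicate a right-tailed distribution) and kurtosis is the average fourth-power z-score (about 3.0 for a normal curve), both in their population forms.

```python
from statistics import mean, pstdev

# Hypothetical score distribution (invented data for practice).
scores = [55, 60, 65, 70, 70, 75, 80, 95]

m = mean(scores)     # 71.25
sd = pstdev(scores)  # population standard deviation
n = len(scores)

# Population skewness: mean cubed z-score; positive = right-tailed.
skewness = sum(((x - m) / sd) ** 3 for x in scores) / n

# Population kurtosis: mean fourth-power z-score; ~3.0 for a normal curve.
kurtosis = sum(((x - m) / sd) ** 4 for x in scores) / n

print(round(m, 2), round(sd, 2))
print(round(skewness, 2), round(kurtosis, 2))
```

With the single high score of 95 pulling the tail to the right, the skewness comes out positive, matching what we learned to read from a distribution's shape.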