
Types of Assessment

I. Indirect assessment: “An analysis of reported perceptions about student mastery of learning outcomes” (Allen, 2007).  In other words, the learning is reported rather than demonstrated.  The report can come from the student or from another person, such as an employer.

a. Examples of indirect assessment (Zelna, n.d.; Allen, 2007):

  •  Surveys, including satisfaction surveys, exit surveys, alumni surveys, and employer surveys
  •  Focus groups on experiences or attitudes
  •  Interviews on experiences or attitudes
  •  Data on enrollment
  •  Data on graduation or retention
  •  Student demographic data

b. Why or when is this pedagogically useful?

  •  When student feedback is needed on something, such as a new presentation method
  •  When the goal is to understand “why and how students learned what they learned” (Wright, 2009)
  •  When the goal is to understand learning byproducts (e.g., vocational success) or environmental influences (e.g., attitude toward academic advising)
  •  When it makes more sense to answer your questions using existing data

II. Direct assessment: “An analysis of students’ behaviors…in which they demonstrate how well they have mastered learning outcomes” (Allen, 2007) or other information or skills.  In other words, the learning is demonstrated rather than reported.

a. Examples of direct assessment (Zelna, n.d.; Allen, 2007):

  •  Portfolios and e-portfolios
  •  Capstone projects
  •  Embedded questions
  •  Exams, including pre-tests and post-tests
  •  Performance on case studies
  •  Performance evaluations (e.g., juries for performing arts students)

b. Why or when is this pedagogically useful?

  •  When the goal is to understand “what your students know and can do, and how well, in relation to your learning outcomes” (Wright, 2009)
  •  When the goal is to gauge how well students are grasping a new topic, in order to determine whether that topic needs more focus
  •  To determine whether a student has mastered knowledge well enough to meet set professional standards, such as by achieving a certain score on a licensure examination
  •  To determine whether a student has mastered a performance medium, such as playing an instrument or graphic design, to standards set by faculty

III. Selected-response assessment, otherwise known as objective assessment: A test or question where “the student reads a relatively brief opening statement and selects one of the provided alternatives as the correct answer” (“Assessment,” 1997).  These types of assessments are known as “objective” because there is no room for subjectivity in grading; either the correct answer was selected or it wasn’t.

a. Examples of selected-response assessment:

  •  Multiple-choice questions
  •  True/false questions
  •  Matching questions

b. Why or when is this pedagogically useful?

  •  When the main objective is to measure “recall of facts” (“Objective Assessment,” 2006)
  •  When testing material that has only one correct answer or group of answers
  •  When the focus is on the lower-level thinking skills (“knowledge” and “comprehension”) in Bloom’s Taxonomy.  Selected-response questions can, however, be written to evoke higher-level thinking skills such as analysis and evaluation (“Bloom’s Taxonomy,” n.d.)
  •  When a quick, efficient, and easy-to-grade assessment is needed (“Writing Selected Response,” n.d.)

IV. Constructed-response assessment, otherwise known as authentic, performance, alternative, or subjective assessment: An assessment that requires “a student to develop his or her own answer in response to a stimulus, or prompt” (Stecher et al., 1997).  Unlike in selected-response assessment, however, “neither the prompts nor the responses need be written” (Stecher et al., 1997).  These types of assessments are known as “subjective” because the grading or scoring is based on the judgment of a qualified individual, usually the teacher or a panel of teachers.

a. Authentic assessments are specific constructed-response assessments that require students to “perform real-world tasks that demonstrate meaningful application of essential knowledge and skills” (Mueller, 2010).

b. Examples of constructed-response assessment:

  •  Essays and essay test questions
  •  Performance tests or activities, which can range from answering questions based on a case study to a physiology lab practical to an instrumental recital
  •  Capstone projects
  •  Portfolios and e-portfolios
  •  Exhibitions, such as senior graphic design shows or film screenings

c. Why or when is this pedagogically useful?

  •  When assessing a product that does not meet the “one correct answer” criterion, such as a response to an essay prompt
  •  When assessing higher-level thinking skills such as application and synthesis (“Bloom’s Taxonomy,” n.d.) rather than the ability to recall knowledge
  •  When the goal is to assess a product that cannot be usefully or accurately measured through written means, such as vocal juries or the ability to construct a birdhouse
  •  When instruction follows the popular constructivist learning philosophy (Mueller, 2010)

V. Formative assessment: “The process of soliciting feedback and information from students regarding their perceptions of their progress throughout the course” (http://www.luc.edu/learningtech/Assessment_Terms.shtml).  Formative assessment is intended to gauge student learning in order to provide feedback and other needed assistance to help the student better grasp a concept or concepts. 

a. Examples of formative assessment:

  •  Providing constructive comments and suggestions for improvement while grading a student’s essay
  •  Grading on a rubric that lets students see what is expected and how they can improve
  •  Quizzes that allow students to track their learning progress (“Formative vs. Summative Assessments,” n.d.)

b. Why or when is this pedagogically useful?

  •  When the goal is to give students an accurate sense of how well they are learning a concept while there is still time to improve, along with concrete input on how to do so
  •  When the goal is “to improve instructional methods and student feedback throughout the teaching and learning process” (“Formative vs. Summative Assessments,” n.d.)

VI.  Summative assessment: “Assessment used to make a judgment of student competency after an instructional phase is complete” (“Formative vs. Summative Assessments,” n. d.).  Summative assessment is also intended to gauge student learning, but as a final measure of how well a student ultimately grasped a concept or concepts.  Here, the feedback generated is used primarily to assist the instructor rather than the student (“Types of Assessment and Evaluation,” n. d.).      

a. Examples of summative assessment:

  •  Final exams
  •  Standardized exams, including licensure exams
  •  Assignments returned with only a final score and no suggestions for improvement

b. Why or when is this pedagogically useful?

  •  To determine how well students have grasped certain concepts in order to drive curriculum modifications for future classes
  •  To provide the instructor with useful feedback on their instruction in order to make informed decisions on how to proceed, such as using scores on a midterm exam to adjust what content is taught between the midterm and the final exam

References

Allen, M. J. (2007). Direct and indirect assessment strategies. Retrieved from http://westoahu.hawaii.edu/pdfs/vcaa/workingdocs/directandindirect.pdf

Assessment. (1997). Retrieved from http://college.cengage.com/education/pbl/tc/assess.html

Bloom’s taxonomy. (n.d.). Retrieved from http://fcit.usf.edu/assessment/selected/responsea.html

Formative vs. summative assessments. (n.d.). Retrieved from http://fcit.usf.edu/assessment/basic/basica.html

Mueller, J. (2010). What is authentic assessment? Retrieved from http://jonathan.mueller.faculty.noctrl.edu/toolbox/whatisit.htm

Objective assessment. (2006). Retrieved from http://vudat.msu.edu/objective_assess/

Stecher, B. M., Rahn, M., Ruby, A., Alt, M., Robyn, A., & Ward, B. (1997). Types of assessment. In Using alternative assessments in vocational education. Retrieved from http://www.rand.org/pubs/monograph_reports/MR836/

Types of assessment and evaluation. (n.d.). Retrieved from http://web.mit.edu/tll/assessment-evaluation/types.html

Wright, B. D. (2009). Approaches, reproaches: The joy of methods [PowerPoint slides]. Retrieved from http://www.aacu.org/meetings/institute_gened/documents/BW_methods.ppt

Writing selected response assessment items. (n.d.). Retrieved from http://fcit.usf.edu/assessment/selected/responseb.html

Zelna, C. L. (n.d.). Basic assessment plan development. Retrieved from http://www.ncsu.edu/assessment/presentations/assess_process/basic_plan_devt.pdf