What are embedded assessments?

Embedded assessments (EAs) are opportunities to assess an individual’s progress and performance that are seamlessly integrated into program activities.

EAs allow learners to demonstrate skills or knowledge, for example, through tasks that are embedded into the learning experience or activity itself. EAs provide an authentic and unobtrusive way to measure performance-based outcomes in a variety of settings (Becker-Klein et al. 2016).

EAs can be used as stand-alone measures or in conjunction with more traditional evaluation measures such as surveys or interviews.

In general, EAs should be:

  • performance-based rather than self-reported

  • integrated into an activity or learning environment

  • authentic to the project or setting

EAs may also have a fourth characteristic: being broadly applicable across different settings or projects.

Advantages of using EAs

  • Place no additional evaluation burden on volunteers

  • Use performance-based tasks that mirror real-life problem-solving situations

  • Encourage reflection on the training of targeted skills


Challenges of using EAs

  • Require dedicated time and resources to develop, implement, and score

  • Limited availability of relevant assessment training and resources

  • Difficult to create a one-size-fits-all solution, given the diversity of citizen science and informal science projects


Our Process

Through our Streamline EA project, we addressed the advantages and challenges of EAs by exploring avenues to “streamline” the development process for skill-based learning outcomes in citizen science. We explored two strategies:

  1. Secondary Analysis of Data Records

  2. Creating a Shared EA


The goal of secondary analysis was to assess volunteers’ skill proficiency based on the data records they submitted; records that document volunteers’ project engagement or the data-collection context can also be added to the analysis. This type of EA was ideal for identifying factors that affect the accuracy of participants’ data collection. For a more detailed description, see Peterman et al. 2022.


The goal of creating a shared EA was to develop a tool that can be used across multiple projects to measure a performance-based learning outcome that is authentic to a project and integrated into the participant experience. Our process consisted of three stages (Figure 1) and resulted in two validated shared EA templates, Notice Relevant Features and Record Standard Observations. For a more detailed description, see Becker-Klein et al. (in review).
