Prototypes (Concrete Tasks)

Joint Inference for Student Strategy Prediction—Task Lead: PI Deepak Venugopal, The University of Memphis

The goal of this research is to predict strategies that a student is likely to follow to solve a problem. Specifically, we will develop a joint inference model that predicts the strategy jointly with the student’s mastery over concepts, since both predictions are inter-dependent and can therefore inform each other. To develop our model, we will combine Markov Logic Networks (MLNs) with Deep Neural Networks (DNNs). MLNs are interpretable probabilistic models that represent domain knowledge (e.g., rules that relate strategy to mastery), while DNNs learn latent representations from the dataset that can be used to simplify the MLN distribution (e.g., similar student strategies can be grouped together). We plan to evaluate our approach on structured student interaction datasets collected from MATHia, where students solve a problem in discrete steps guided by an automated tutor.
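At its core, an MLN scores each joint (strategy, mastery) assignment with a log-linear model: the probability of a world is proportional to the exponentiated sum of the weights of the rules it satisfies. The toy sketch below illustrates only that scoring scheme; the rules, weights, and strategy names are invented for illustration and are not the project's actual model:

```python
import itertools
import math

# Hypothetical weighted rules relating strategy choice to concept mastery.
# Each rule returns True if the given world satisfies it.
RULES = [
    # "Students who have mastered the concept tend to use the efficient strategy."
    (1.5, lambda strategy, mastered: strategy == "efficient" if mastered else True),
    # "Students without mastery tend to guess."
    (0.8, lambda strategy, mastered: strategy == "guess" if not mastered else True),
]

STRATEGIES = ["efficient", "guess"]

def joint_distribution():
    """Score every joint (strategy, mastery) world with exp(sum of satisfied
    rule weights), then normalize -- the core MLN log-linear form."""
    worlds = list(itertools.product(STRATEGIES, [True, False]))
    scores = {}
    for strategy, mastered in worlds:
        total = sum(w for w, rule in RULES if rule(strategy, mastered))
        scores[(strategy, mastered)] = math.exp(total)
    z = sum(scores.values())
    return {world: s / z for world, s in scores.items()}

dist = joint_distribution()
# Marginal strategy prediction, jointly informed by the mastery variable.
p_efficient = sum(p for (s, m), p in dist.items() if s == "efficient")
```

In the full approach, the DNN would supply the groupings (e.g., clusters of similar strategies) that shrink the set of worlds the MLN must score.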

Software package release for Student Strategy Prediction using a Neuro-Symbolic Approach: https://github.com/anupshakya07/SSPM and the associated paper: [PDF]

Domain Models—Task Lead: PI Philip Pavlik, The University of Memphis

In this project, the researchers will focus on models of content (i.e., domain models) that define the skills and knowledge in an academic area. Such domain models can be used to understand student learning progress as components of adaptive instructional systems (AISs), in which the domain model helps organize adaptation in response to specific needs. Domain models can be described by experts, but it has also been shown that automatic methods can create and improve these quantitative models of skills and their relationships. In this project, we will review prior progress in this area and compare a variety of machine learning and statistical methods for improving academic domain models using learner data. The product of this work will be a review paper that attempts to classify and compare the methods both qualitatively and quantitatively.

Scaling Empirical Refinement of Domain Models & Instructional Design—Task Lead: PI Stephen Fancsali, Carnegie Learning

This project scales up cutting-edge deep learning approaches to modeling student performance, comparing and contrasting these methods with more traditional Bayesian statistical approaches deployed in existing, widely used adaptive instructional systems like Carnegie Learning’s MATHia. We plan to use emerging differences and similarities between these approaches to target data-intensive work to semi-automatically improve the underlying cognitive or skill models of the math domain that drive adaptive instruction in MATHia. We then aim to test such improvements in large-scale experimental studies using Carnegie Learning’s UpGrade platform, establishing a pipeline for data-intensive learning engineering research that “closes the loop” between big data and practical instructional improvement.
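The traditional Bayesian approaches used in adaptive tutors of this kind are typically variants of Bayesian Knowledge Tracing (BKT). As a point of reference (the parameter values below are illustrative, not MATHia's actual parameterization), a standard BKT posterior update can be sketched as:

```python
def bkt_update(p_known, correct, slip=0.1, guess=0.2, learn=0.15):
    """One step of Bayesian Knowledge Tracing: update the probability that the
    student knows the skill after observing one response, then apply the
    learning transition."""
    if correct:
        evidence = p_known * (1 - slip)
        posterior = evidence / (evidence + (1 - p_known) * guess)
    else:
        evidence = p_known * slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - guess))
    # Chance the skill is learned on this practice opportunity.
    return posterior + (1 - posterior) * learn

p = 0.3  # prior probability of mastery
for response in [True, True, False, True]:
    p = bkt_update(p, response)
```

Deep learning models of student performance replace this hand-specified update with learned latent dynamics, which is precisely where the two families of methods can be compared.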

Autoencoders for Educational Assessment—Task Lead: PI Dale Bowman, The University of Memphis

The goal of this concrete task is to design and develop interpretable neural networks that can objectively quantify student performance in educational settings. Typically, the hidden layers of neural networks are abstract and not open to interpretation, making them seem like a “black box”. Using a variational autoencoder (VAE) whose decoder activation function is a logistic sigmoid enables the weights and biases to be interpreted as parameters of an item response theory model (Curi et al., 2019).
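Structurally, such a decoder is a multidimensional item response function: the latent vector plays the role of student ability, the decoder weights act like item discriminations, and the biases like item easiness. A minimal sketch of this correspondence, with invented shapes and randomly initialized weights standing in for trained ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: 3 latent ability dimensions (theta), 5 test items.
n_latent, n_items = 3, 5
W = rng.normal(size=(n_items, n_latent))  # interpretable as item discriminations
b = rng.normal(size=n_items)              # interpretable as item easiness

def decoder(theta):
    """VAE decoder with a logistic sigmoid output: structurally identical to a
    multidimensional IRT response function, P(correct) = sigmoid(W @ theta + b)."""
    return 1.0 / (1.0 + np.exp(-(W @ theta + b)))

theta = rng.normal(size=n_latent)   # a student's latent ability vector
p_correct = decoder(theta)          # predicted probability correct per item
```

After training, reading W and b off the decoder yields the IRT-style item parameters, which is what makes the network interpretable.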

Causal Modeling Literature Review—Task Lead: PI Andrew Olney, The University of Memphis

This project will create a proposed AI system (ESCANoRR) that will predict reproducibility and replicability (R&R) by deeply analyzing a scientific paper’s claims, evidence, and the arguments that connect claims to evidence (ACE). ESCANoRR will emulate an expert human reviewer by evaluating ACE as written, ACE when the paper’s data are reconstructed and reanalyzed using the original methods (reproducibility), and ACE when the data are simulated as new samples and reanalyzed using best-practice methods (replicability). To achieve these goals, we will overcome the following key technical challenges:
• Extraction of arguments, claims, and evidence (ACE) from scientific papers
• Evaluation of internal consistency based on the extracted ACE
• Construction of a causal model (CM) based on the extracted ACE
• Reconstruction of original data and new samples using the CM
• Reporting on R&R

ENGAGE SmartDesk & SmartClassroom: An IoT-Supported Data Collection System for Tracking Student Behavior Across Time in Public Education—Task Leads: PI Laura Casey & Susan Elswick, The University of Memphis

This LDI project will focus on creating and establishing normative behavior data sets for comparative analyses, developing automated data collection systems to increase the frequency and accuracy of data collection, using existing data sets to guide best practices for future data collection, and increasing access to behavioral data sets.

Investigate AI methods to support accurate root cause analysis of learner data—Task Leads: PI Robert Sottilare & Ross Hoehn, Soar Technology, Inc.

Our proposed goal is to research and develop an AI-based root cause analysis (RCA) solution for the Learner Data Institute that will enable instructional systems to identify and address root causes that influence learner experiences (e.g., engagement), learning outcomes (e.g., performance and transfer of learning), and instructional effectiveness. Under Research Objective 1, we plan to investigate AI-based RCA methods in the literature that demonstrate effective causal inference but may or may not have been applied to learning science or educational technology. For this phase, we will examine several causal inference frameworks and their associated causal models, including, but not limited to:
• Structural Causal Models (SCM; Pearl, 1998; Cinelli, Kumor, Chen, Pearl, & Bareinboim, 2019) – broad application.
Specifically, we want to answer several questions about how AI-based RCA methods can support accurate root cause analysis of learner data to optimize learner experiences, learning outcomes, and AIS effectiveness.
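As a minimal illustration of the SCM formalism (the variables, structural equations, and coefficients below are invented, not project findings), an intervention with the do-operator severs a variable from its structural parents, which is what distinguishes a causal effect from a mere correlation:

```python
import random

random.seed(0)

# A toy structural causal model: sleep -> engagement -> performance.
def sample(do_engagement=None):
    sleep = random.gauss(7, 1)
    if do_engagement is None:
        engagement = 0.5 * sleep + random.gauss(0, 0.5)   # structural equation
    else:
        engagement = do_engagement                        # do() severs the parent link
    performance = 10 * engagement + random.gauss(0, 2)
    return performance

# Average causal effect of raising engagement from 3 to 4 units:
# E[performance | do(e=4)] - E[performance | do(e=3)]
n = 20000
ace = (sum(sample(do_engagement=4) for _ in range(n)) / n
       - sum(sample(do_engagement=3) for _ in range(n)) / n)
```

An RCA method built on SCMs would learn such structural equations from learner data and then query interventions like the one above to isolate root causes.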

Context-aware personalized recommender models utilizing users’ wearables, IoTs, and smart devices at the edge, implemented with transfer learning—Task Lead: PI Bashir Morshed, The University of Memphis

An effective learning activity should consider not only the user's learning history but also the context of inquiry, with minimal burden to the user. The proposed task will therefore develop a model and a prototype recommender that presents the most suitable contextual and personalized learning activities to users by passively collecting, processing, and analyzing sensor data from users' available wearables and smart devices, along with metadata from surrounding IoT devices. The recommender will autonomously and accurately predict the context using real-time edge computing implemented with transfer learning of trained AI models.

Using Big Data to Develop a New Method for Examining the Relationship between Student Socioeconomic Status and Achievement—Task Lead: PI Todd Zoblotsky, The University of Memphis

This project will examine educational data in relation to free or reduced-price lunch (FRL) status. The proposed study intends to answer the following research questions:
• What is the impact of using school-level vs. student-level FRL data on the analysis and interpretation of student achievement outcomes? How similar are outcomes when using school-level vs. student-level FRL data? Do results vary by grade level (elementary, middle school, high school)? Do results vary by subject area (reading or math)?
• What is the impact of using other measures of student disadvantage (e.g., parent education level) vs. student-level FRL data on the analysis and interpretation of student achievement outcomes? How similar are outcomes when using alternative measures of student disadvantage vs. student-level FRL data? Do results vary by grade level (elementary, middle school, high school)? Do results vary by subject area (reading or math)?
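The core school-level vs. student-level comparison can be sketched with simulated data (all values below are invented for illustration): fit the same achievement model once with each student's FRL status and once with the school's FRL rate as a proxy, then compare the estimated coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated illustration: achievement depends on student-level FRL status;
# the school-level FRL rate is only an aggregate proxy for it.
n_schools, per_school = 50, 40
school = np.repeat(np.arange(n_schools), per_school)
school_frl_rate = rng.uniform(0.1, 0.9, n_schools)
frl = rng.random(n_schools * per_school) < school_frl_rate[school]
achieve = 60 - 8 * frl + rng.normal(0, 10, frl.size)

def slope(x, y):
    """Ordinary least squares slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x, dtype=float), x.astype(float)])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

b_student = slope(frl, achieve)                       # student-level estimate
b_school = slope(school_frl_rate[school], achieve)    # school-level proxy estimate
```

The study's question is precisely how far estimates like these two diverge in real data, across grade levels and subject areas.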

Data Coding Tool—Task Leads: PI Zhiqiang Cai & David Williamson Shaffer, The University of Wisconsin-Madison

Our goal is to find solutions to address the two aforementioned challenges and to develop computer tools that help researchers efficiently and accurately code large learning datasets. Specifically, we seek: more efficient ways to find codes with relatively low base rates (say, 0.1%–5%), and ways to reduce missed coding rules and thus increase coding recall.
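As a toy illustration of the recall problem (the utterances, labels, and keywords below are all invented), a keyword-based coding rule that misses one phrasing of a low-base-rate code loses recall that is hard to detect without exhaustive hand-labeling:

```python
# Hand-labeled utterances for a hypothetical "causal reasoning" code.
utterances = [
    ("I think gravity pulls it down", True),
    ("because the mass is bigger", True),
    ("it just falls", False),
    ("the force makes it accelerate", True),
    ("I don't know", False),
    ("it looks heavy", False),
]

KEYWORDS = {"because", "force", "makes"}

def rule(text):
    """A simple keyword-matching coding rule."""
    return any(k in text.split() for k in KEYWORDS)

tp = sum(1 for text, label in utterances if label and rule(text))
fn = sum(1 for text, label in utterances if label and not rule(text))
fp = sum(1 for text, label in utterances if not label and rule(text))

recall = tp / (tp + fn)       # missed coding rules lower this
precision = tp / (tp + fp)
```

Here the rule misses the "pulls it down" phrasing entirely; at a 0.1%–5% base rate, each such missed rule can hide a large share of the true positives.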

The Talk Shop—Task Lead: PI Chip Morrison, The University of Memphis

The Talk Shop is a design for a configurable, general-purpose, AI-enhanced platform capable of supporting various kinds of task-embedded cooperative discussions among small groups of humans and virtual conversation agents—and collecting large-scale transcript corpora from these interactions. The design calls for a simple but powerful authoring system that will allow teachers, researchers, and other users to quickly build activities around selected tasks, assign users to discussion groups, configure the behavior of an intelligent moderator and other agents, and collect data from these interactions. The system will build on existing technologies, and serve as a testbed for emerging ones. It can be deployed alone or in conjunction with other platforms—including other kinds of adaptive instructional systems (AISs) and intelligent tutoring systems. The Talk Shop will generate large-scale transcript corpora, which will be available for basic research and system improvement (e.g., through reinforcement learning). Further, by meeting an urgent need for effective methods of structuring online discussion groups, we intend that The Talk Shop will have a transformative impact on both 21st-century learning ecosystems and on the application of learning science and data science to the study of how humans learn from each other in the context of task-embedded discourse. We are in an early design phase, and look forward to receiving feedback and assistance from as many LDI participants as possible. Given sufficient resources, we intend to have a working prototype available for testing in approximately 18 months.

Validating a Framework for Learning Experience Design—Task Lead: PI Andrew Tawfik, The University of Memphis

One goal of this project is to define the term ‘learning experience design’ and set the trajectory for the research that follows. Another goal is to use a data-driven approach to model and predict users’ behavior and human-computer interaction within a learning technology context (what we call ‘learning experience design’). Specifically, two overarching subgoals of this work are (1) to explicate the ontological and epistemological underpinnings of UX design as applied in LDT (i.e., learning experience design (LXD)) and (2) to foreground the importance of LXD as an emerging design paradigm.

Systematically Evaluate Adaptive Instructional Systems (AIS)—Task Lead: PI Xiangen Hu, The University of Memphis

We want to evaluate the effectiveness of AIS by systematically analyzing the system activity stream of AIS applications.

AutoTutor for Adult Reading Comprehension Training—Task Leads: PI Art Graesser, John Sabatini, Xiangen Hu, Su Chen (The University of Memphis) & Zhiqiang Cai (The University of Wisconsin-Madison)

John Sabatini, Art Graesser, and Xiangen Hu have been notified that AutoTutor-ARC will receive additional funding on a new IES grant that will examine how AutoTutor-ARC is used by teachers and adults in actual literacy centers. The sample of literacy centers will hopefully grow during the course of this grant. For example, ProLiteracy (proliteracy.org) has agreed to host AT-ARC and to work with us to scale up and revise the system. ProLiteracy works with over 1,000 literacy centers in the US, so there is potential to collect a large corpus of data: if 100 centers use AutoTutor-ARC, with 10 instructors per center and 10 students per instructor, there will be data for 10,000 students. Moreover, John Sabatini has psychometrically validated measures of reading comprehension (SARA, GISA) that were developed on other IES grants (one being a center grant for the Reading for Understanding initiative). However, there are notable limitations to the scope of the new IES grant. It is beyond the scope of the grant to develop an intelligent recommender system that gives a small number of options on which lesson the instructor, or the adult learner via self-regulation, should do next. There is also no learner record store covering the learner’s history of using AutoTutor-ARC and the measures available via psychometric tests, self-reports, demographics, and psychological characteristics derived from interactions with AutoTutor-ARC. The proposed concrete task will attempt to fill these gaps.

Variability in Motivational Feedback Systems during Learning with Intelligent Tutoring Systems (ITS): Exploratory Opportunities Using Big Data At-Scale—Task Leads: PI Christian Mueller, Leigh Harrell Williams (The University of Memphis), Stephen Fancsali & Steve Ritter (Carnegie Learning)

In the present project, we will build on previous work examining the role of motivation, metacognition and self-regulation in student learning with intelligent tutoring systems (e.g., LearnLab, Carnegie Mellon University, Metacognition and Motivation Research Thrust) by exploring how variations in message delivery (e.g., content, mechanism, timing, etc.) impact student learning during engagement with intelligent tutoring systems. Primarily, we will do this in two phases. Phase I will serve as discovery to gain a better understanding of the existing literature as it pertains to how motivational messages are delivered to students during learning with intelligent tutoring systems (e.g., Bernacki, Aleven, & Nokes-Malach, 2014). Additionally, archived data (e.g., DataShop, Ritter, Anderson, Koedinger, & Corbett, 2007) will also be explored to assess whether additional analyses can inform study design in Phase II of the project. Drawing from our findings of the literature search and exploration of archived data in Phase I, in Phase II we will design a small pilot study and then a larger study based on pilot study findings to explore how motivational message delivery (i.e., content, timing, message/no message, etc.) may differentially impact students during learning with intelligent tutoring systems.

Exploring and Assessing the Development of Students’ Argumentative Writing Skills—Task Leads: PI John Sabatini, Art Graesser, Xiangen Hu, and Vasile Rus, The University of Memphis

We aim to develop new capabilities for using online, collaborative environments to facilitate the development of student argumentation (critical discussion) skills and identify malleable factors that influence students’ progress in writing at the middle school level. We will use scenario-based assessment tasks (Sabatini, O’Reilly, Weeks & Wang, 2019) to evaluate students’ argumentative writing skills as outcomes. The evidence derived from our research will provide actionable information about the malleable factors that promote the development of middle school students’ argumentative writing skills, useful for instructional development projects.