As you can see from the video, in 2023, people had already started questioning whether A.I. would have the capacity to code games. Through continued prompting and editing, it is possible to get A.I. to code mobile games suitable for assessing students.
Richardson and Clesham (2021) found that teachers using online assessments were often simply transferring their paper-and-pencil versions onto screens. They therefore suggested incorporating mobile technology into more assessments, since smartphones already have the capacity to include gaming elements. Doing so could increase the security of the exam and allow teachers to spend less time marking and more time teaching (Richardson & Clesham, 2021).
Perhaps in the near future, it will become possible for A.I. to create specific mobile games so that outcomes can be assessed in a personalized manner. With the ability to build in accommodations, each student would have multiple modes to demonstrate their learning. However, it is important to note that student information is shared when using A.I. engines, which raises privacy concerns.
Math teachers are familiar with GeoGebra as a helpful math tool. GeoGebra has already developed ways to integrate math into A.R. challenges, so who's to say the next thing would not be making A.R. games?
Building on the idea of storylines in mobile games, adding a hint of augmented reality could ground knowledge and skills in more realistic, real-world applications. This could also create more opportunities to collaborate with multiple people toward a common goal, making it possible to engage with like-minded peers across the globe.
The question now is how to effectively use the data that is generated about students' knowledge inside the classroom through the creation of these A.R. games.
Right now, mobile games in education are typically used for formative assessment. These kinds of tools let students practice, get instant feedback, and stay engaged through things like leaderboards, points, or rewards.
When it comes to summative assessment, mobile games are not used as often. There are a few reasons for this. Teachers and institutions sometimes worry about whether game-based systems are valid and reliable enough for high-stakes results. Many game platforms focus on multiple-choice or recall-style questions, which do not always capture deeper thinking (Mimouni, 2022). Some people also feel that using games for final evaluations risks making the learning seem less serious. On top of that, there are practical issues, such as whether all students have access to devices and whether the internet connection and bandwidth are strong enough (Richardson & Clesham, 2021).
Even with these challenges, many researchers see potential for mobile games in summative contexts. As devices improve, educational games can use adaptive levels, richer simulations, and AI scoring. This could create assessments that go beyond memory recall and measure things like problem solving, collaboration, or applying knowledge in real-world situations (Chavez & Palaoag, 2024; Xu et al., 2024).
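To make the idea of adaptive levels concrete, one simple approach is a rule that raises difficulty after a streak of correct answers and lowers it after a streak of misses. The sketch below is purely illustrative; the streak length and level bounds are assumed parameters, not taken from the cited studies:

```python
# Illustrative sketch of a simple adaptive-difficulty rule.
# The streak length of 3 and the level range 1-10 are assumptions
# chosen for the example, not drawn from any cited platform.

def adjust_difficulty(level, recent_results, streak=3,
                      min_level=1, max_level=10):
    """Return a new difficulty level based on the last few answers.

    recent_results is a list of booleans (True = correct answer).
    """
    if len(recent_results) < streak:
        return level                      # not enough data yet
    window = recent_results[-streak:]
    if all(window):                       # streak of correct answers
        return min(level + 1, max_level)
    if not any(window):                   # streak of incorrect answers
        return max(level - 1, min_level)
    return level                          # mixed results: hold steady
```

A real game would layer richer signals (response time, hint use) on top of a rule like this, but even this minimal version shows how gameplay can generate performance data beyond a single score.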
Looking forward, we may see a mix of both. Mobile games will likely remain most common in formative learning, but they could also start to support summative assessment by collecting performance data over time. For this to work, game design would need clear standards, teachers would need training, and schools would need safeguards to make sure results are fair, accurate, and secure.
PBLL elements (points, badges, leaderboards, and levels) are widely used in educational technology because they are intuitive, easy to implement, and resonate with learners. Points and badges provide immediate feedback and visible recognition of progress, leaderboards introduce social motivation, and levels create a clear sense of advancement (Deterding et al., 2011). In mobile game-based assessment, these features are especially attractive because they translate well to app interfaces and generate simple, quantifiable data for tracking learner performance.
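To illustrate why PBLL features translate so easily into quantifiable data, here is a minimal sketch of a progress-tracking record. All class names, badge rules, and point thresholds are hypothetical assumptions for the example, not taken from any cited platform:

```python
# Minimal sketch of PBLL-style progress tracking.
# Names and thresholds are illustrative assumptions only.

class LearnerProgress:
    LEVEL_SIZE = 100            # points per level (assumed)
    BADGE_THRESHOLDS = {        # hypothetical badge rules
        "starter": 50,
        "achiever": 200,
    }

    def __init__(self, name):
        self.name = name
        self.points = 0
        self.badges = set()

    def award(self, points):
        """Add points and unlock any badges whose threshold is met."""
        self.points += points
        for badge, needed in self.BADGE_THRESHOLDS.items():
            if self.points >= needed:
                self.badges.add(badge)

    @property
    def level(self):
        """Derive the current level directly from accumulated points."""
        return self.points // self.LEVEL_SIZE + 1


def leaderboard(learners):
    """Rank learners by points, highest first."""
    return sorted(learners, key=lambda l: l.points, reverse=True)
```

Every PBLL element here reduces to a single integer or set, which is exactly what makes these mechanics so easy to log, chart, and report; the same simplicity is also why, as the next paragraph notes, they can miss deeper understanding.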
At the same time, reliance on PBLL has limitations. While these mechanics can spark short-term motivation, they risk encouraging learners to chase rewards rather than focus on deeper understanding (Hanus & Fox, 2015). Leaderboards in particular can alienate students who rank low, potentially undermining confidence and participation. Overuse of PBLL may also reduce interest over time, as the diminishing novelty of rewards weakens sustained engagement.
Rather than abandoning PBLL altogether, future designs may benefit from integrating these elements into richer systems that emphasize collaboration, mastery, and meaningful narrative. In this way, PBLL can serve as an entry point to engagement without being the sole driver of learning.