So here's my idea for how to test educational video games' efficiency and efficacy: a scientifically controlled test of performance.
At the beginning of the test period (which should be run with K-12 or early-college students during a break between classes), each subject takes a test to establish baseline performance. The results are recorded and kept for comparison against results at the end of the study. The subjects have already been split into nine groups, based on the type of learning experience they will have and the amount of time they will get with the interactive learning platform.
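The nine-group split described above is a 3×3 design: three kinds of learning experience crossed with three time allotments. A minimal sketch of that grid, assuming the labels below (the names are my own shorthand for the clusters and schedules described later, not terms from the proposal):

```python
from itertools import product

# Hypothetical labels for the 3x3 design; the exact strings are my own.
experiences = [
    "interactive test",
    "interactive storybook",
    "storybook with mini-games",
]
schedules = [
    "one 30-min session",
    "one 90-min session",
    "three 90-min sessions/week for two weeks",
]

# Each subject is assigned to exactly one of the nine resulting groups.
groups = list(product(experiences, schedules))
assert len(groups) == 9
```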
There are a few things to notice about all the different learning experiences: all record the players' scores, displaying them to the player and to observers, and track various other metrics; all share the same graphical style and methods of interaction; and all run on a computer using a mouse for input, though the third also adds simple keyboard input to control the player's character in action segments.
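The scores and "various other metrics" mentioned above could be captured in a small per-subject record. A minimal sketch, assuming field names of my own invention (none of these identifiers come from the proposal):

```python
from dataclasses import dataclass, field

@dataclass
class SessionRecord:
    """One subject's data for one session; all field names are hypothetical."""
    subject_id: str
    group: int            # 1-9, per the nine-group split
    score: int            # displayed to the player and observers
    events: list = field(default_factory=list)  # other tracked metrics

# Example: log a timestamped event alongside the displayed score.
rec = SessionRecord(subject_id="s001", group=3, score=42)
rec.events.append(("question_answered", 12.5))
```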
The first three groups take a simple interactive digital test with feedback. They take it for either a single half-hour session, a single hour-and-a-half session, or three hour-and-a-half blocks per week for two weeks. At the end of the two weeks they take another test.
The second three groups run through an interactive storybook with six fairly long divergent storylines revolving around a central protagonist. After the player makes a selection, each storyline runs them through a short narrative, shows what was right or wrong about their answer, and presents multiple possible options. Players can either attempt to play storylines through to completion or start a new one each session, though only the highest-duration group is likely to finish more than two storylines.
The third cluster of groups runs through the interactive storybook, but in addition to the short narrative (and partially replacing it), a series of simple games is played. The player makes their choice as in the second cluster, but instead of merely reading about the unfolding events, they take on the role of the central protagonist and play a series of simple games. For instance, the player may fight a fearsome dragon as a cartoon-styled knight, using certain buttons to move between spaces in the scene and another to leap vertically, sword drawn, in an attempt to take down the dragon threatening the populace. These games are meant to move players quickly on to the next test question and the narrative that follows, lasting about fifteen or twenty seconds each with a five-second warm-up.
At the end of the two weeks of testing, the subjects take another test to measure performance changes, and each subject's final test is compared with their first. The gathered feedback and metrics are then analyzed to determine the best way to set up the learning experience, should it prove effective.
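The pre/post comparison above amounts to computing each group's average score gain and seeing which group improved most. A minimal sketch of that comparison, using made-up scores purely for illustration (the numbers and group keys are not real data):

```python
from statistics import mean

# Hypothetical pre- and post-test scores keyed by group; invented for
# illustration only.
pre  = {"group1": [55, 60, 58], "group2": [57, 59, 61]}
post = {"group1": [65, 72, 70], "group2": [60, 63, 62]}

def mean_gain(group):
    """Average improvement from each subject's first test to their final test."""
    return mean(b - a for a, b in zip(pre[group], post[group]))

gains = {g: mean_gain(g) for g in pre}
best = max(gains, key=gains.get)  # the group with the largest average gain
```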