link to paper: https://www.mdpi.com/2079-6374/13/7/718
Karalekas et al. propose the use of educational robot (ER) kits when teaching children about machine learning. The reasoning behind using ERs is that they can easily be integrated into the four categories of basic ML knowledge students should acquire through a course. The primary benefit of ERs is their ability to bring machine learning concepts into the physical world, allowing learners to interact with them directly.
Six ER platforms were investigated based on the following criteria: out-of-the-box AI modules, a small form factor, and a programming language suitable for children. Karalekas et al. do not perform any experiments with learners, but rather hypothesize about each of the six platforms' usefulness in various classroom environments. Of the platforms assessed, the LEGO MINDSTORMS platform was deemed most suitable for the task. Because it is not reliant on a car-based design, it can remain stationary, connect to a computer via a cable, and adapt to a variety of lessons.
In their conclusion, Karalekas et al. present theoretical guidelines for teaching ML with robotics. While some of these guidelines have been reported in other research, such as emphasizing interactivity with machine learning, the authors assert that ER kits leave room for tangible experimentation by young learners. For instance, varying the number of hidden layers of a deep-learning model can result in very distinct robot behaviors on a given task.
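The general point is easy to demonstrate outside of any ER kit. A minimal sketch (a toy classification task, not the paper's robot setup) trains networks that differ only in their hidden layers and shows that they behave measurably differently:

```python
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# A toy two-class task standing in for the robot's learning problem.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# Same task, same data; only the hidden-layer architecture changes.
for layers in [(2,), (16, 16), (64, 64, 64)]:
    clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=2000, random_state=0)
    clf.fit(X, y)
    print(layers, round(clf.score(X, y), 3))  # accuracy varies with depth/width
```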
The contribution of this paper is the argument that a physical medium for machine learning is useful for young learners, along with a review of a select number of platforms for educational scenarios. The work could have been further strengthened by a workshop or set of experiments using any of the discussed kits and analyzing their effectiveness.
Williams et al. assessed student learning outcomes with multiple assessments administered on a tablet. The tablet medium was chosen for its ability to show movies for the stories and pictures for the questions. Children also tend to be comfortable around tablets, making an even stronger case for the tablet as an assessment medium.
Each activity the children participated in was followed by an assessment of three to four questions. Each question had a correct answer and covered a different aspect of the topic than the other questions for that topic. Many of the questions required skills measured by the Theory of Mind assessment, and it was assumed that its results would strongly influence the results of the topic assessments.
The children initially took a variation of the Theory of Mind assessment that was made for the PopBots project. After the first activity, knowledge-based systems, the children were asked questions that did not merely recap what had occurred but extended the experiment. For example, the second question of the assessment asks: "Sally plays paper five times, what does the robot think she will play next?" This scenario likely did not occur during the activity, so answering it requires the children to have a deep and thorough understanding of how the robot models its environment.
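The paper does not spell out the robot's prediction rule, but a plausible minimal model consistent with the question is a frequency counter over observed moves. The sketch below is an assumption for illustration, not the PopBots implementation:

```python
from collections import Counter

def predict_next_move(history):
    """Predict the opponent's next move as their most frequent past move.

    A minimal frequency-based learner: if Sally has played 'paper' five
    times, the most common move (and therefore the prediction) is 'paper'.
    """
    if not history:
        return None  # no observations to learn from yet
    counts = Counter(history)
    return counts.most_common(1)[0][0]

# Sally plays paper five times; the robot predicts paper again.
print(predict_next_move(["paper"] * 5))  # -> 'paper'
```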
The pre-activity assessments were in line with previous work and reaffirmed that children are able to reason about intelligence. After the activities, children were more likely to have shifted perspective, but their prior experiences with robotics still had a strong hold on their perceptions.
The most surprising thing I found in the Mahipal et al. paper was the variation in how the four students interpreted the network and its results. Each student relied on the suggested prompts to a different degree: one almost never used them, one used them heavily, and two split their time between prompts and free drawing. It is intriguing that the two students at the extremes of prompt usage had positive experiences, while the two closer to the middle had negative ones.
What surprises me about the students who had a negative perception of the application is that they did understand how the network makes its decisions. These students could correctly match filter patterns to outputs, and noticed that a dog may be inferred as a cat since it "kind of looks like a cat".
Finally, I find it interesting that the four students interpreted the CNN as being skewed towards cakes. The network was trained on an even split of categories, with 50k images each, and it is reasonable to assume the test/train split kept the testing proportions the same as the overall distribution. While the students could understand how one image could be classified as another (the dog as a cat), they did not make the connection that whatever they drew could easily be interpreted as a cake by a model that has only ever seen six object types.
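That closed-set behavior is easy to demonstrate: a softmax classifier has to spread all of its probability mass across the classes it knows, so even a drawing unlike anything in training gets assigned to one of the six categories. A minimal numpy sketch, where both the class list and the logit values are made up for illustration:

```python
import numpy as np

classes = ["cat", "dog", "cake", "car", "tree", "fish"]  # illustrative labels

# Hypothetical logits for a drawing that resembles none of the classes;
# every score is weak, but softmax still forces a choice among the six.
logits = np.array([0.2, 0.1, 0.6, 0.0, 0.1, 0.0])

probs = np.exp(logits) / np.exp(logits).sum()
print(classes[int(np.argmax(probs))])  # -> 'cake', despite weak evidence
```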
This application revolves around the theme "computers perceive the world using sensors". There will be a task that an ML agent needs to achieve, and to do this task there may be 1 to N sensors available. The children will be able to turn the various sensors available to the ML agent on and off, then run the simulation to see if the agent can achieve the task. To incorporate another theme, learning from data, we could allow the agent to run multiple times to see how close it can get to achieving the task.
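A minimal sketch of the sensor-toggling idea, where the sensor names, the number-line task, and the agent's policy are all hypothetical stand-ins:

```python
import random

SENSORS = ["camera", "distance", "light"]  # hypothetical sensor set

def run_episode(enabled_sensors, target=4, steps=10):
    """Agent tries to reach `target` on a number line.

    With the 'distance' sensor enabled it moves toward the target; without
    it, it wanders randomly, so children can see how sensing affects success.
    """
    position = 0
    for _ in range(steps):
        if "distance" in enabled_sensors:
            position += 1 if position < target else -1 if position > target else 0
        else:
            position += random.choice([-1, 1])
    return position == target

# Children toggle sensors and rerun the simulation to compare outcomes.
print(run_episode(enabled_sensors={"distance"}))  # reliably True
print(run_episode(enabled_sensors=set()))         # only occasionally True
```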
With this application I'd want to extend the work done in DoodleIt, explainable image recognition, and apply it to middle-school mathematics. The application would have a canvas for a user to write a math equation such as "5 x 7 = ?". The model would show that it understands the problem as "what is 5 times 7?" using a scaled-down version of how DoodleIt visualizes its filters and layers. It would then calculate the result, 35, and generate a middle-school-level proof of why that is the case.
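Setting aside the handwriting-recognition front end, the proof-generation back end could start as simply as the sketch below, which assumes the CNN has already transcribed the canvas into a text string (the function name and format are hypothetical):

```python
def explain_product(expression):
    """Turn a recognized expression like '5 x 7 = ?' into a
    middle-school-style explanation via repeated addition."""
    left, _ = expression.split("=")
    a, b = (int(n) for n in left.split("x"))
    terms = " + ".join(str(a) for _ in range(b))
    return f"{a} x {b} means adding {a} to itself {b} times:\n{terms} = {a * b}"

print(explain_product("5 x 7 = ?"))
# 5 x 7 means adding 5 to itself 7 times:
# 5 + 5 + 5 + 5 + 5 + 5 + 5 = 35
```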
This application covers two of the Five Big Ideas that have been developed. The first idea covered is "computers perceive the world using sensors", with the sensor being the canvas. The second idea covered is that computers can learn from data: the application learns what the user is writing through its CNN, and generating the proofs also draws on data it has ingested to learn the answers. I'm not sure whether this would cover the fourth idea, comfortable human interaction, since the hope is that comfortable interaction will not present itself as a challenge but simply occur naturally.
This web application would help middle-school children with sentence grammar. The application would take in a sentence or paragraph and break down what is what, for example identifying the predicate of a given sentence.
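One plausible back end for this, an assumption rather than part of the original idea, is an off-the-shelf dependency parser such as spaCy: the root verb and everything after it roughly correspond to the predicate in a simple sentence.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def find_predicate(sentence):
    """Approximate the predicate as the root verb plus everything after it.

    A rough heuristic that works for simple subject-first sentences.
    """
    doc = nlp(sentence)
    root = next(tok for tok in doc if tok.dep_ == "ROOT")
    return doc[root.i:].text

print(find_predicate("The quick brown fox jumps over the lazy dog."))
# -> 'jumps over the lazy dog.'
```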
Another feature could be suggesting synonyms and antonyms for words. This feature could utilize Google's Word2Vec to show how some words are found to be similar in models: for example, taking the vector for the word "king", subtracting the vector for the word "man", and adding the vector for the word "woman" results in a vector near the one for the word "queen".
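With gensim and the pretrained Google News vectors, that classic analogy can be demonstrated in a few lines (note the model is a large one-time download):

```python
import gensim.downloader as api

# Pretrained word2vec vectors trained on Google News (~1.6 GB download).
model = api.load("word2vec-google-news-300")

# king - man + woman ~= queen
result = model.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # [('queen', ~0.71)]
```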
This application covers the second theme, "agents maintain models/representations of the world and use them for reasoning", through the synonym feature, since the word vectors are a learned representation of language. The application also covers perceiving the world, with text as the input the system perceives.