Reflections
What is AI Literacy? Competencies and Design Considerations
This article helped me understand what exactly Artificial Intelligence is and gave me a starting point for thinking about how to design and build our project this semester. I think I fell into the "superhuman fallacy" before reading this paper: whenever I've heard the term "AI," the first thing my mind would go to is examples of human-like intelligence from popular fiction (Ex Machina, 2001: A Space Odyssey, A.I., etc.). As we develop this project, it's important to consider how students with non-technical backgrounds perceive AI and how we can start to align their preconceptions with the reality of the technology.
I especially liked the 17 competencies needed for developing AI literacy, particularly Competency 10 (Human Role in AI), as I think it's important to understand (as dramatic as it sounds) that it's AI that's at the whims of human design, not the other way around. Design Consideration 8 (Critical Thinking) was another good recommendation from the researchers. As AI products become more mainstream in the consumer space, I think we need to focus on building a tool that teaches young learners to be critical of how they use these tools. We teach ethics and critical thinking in other disciplines, so it just makes sense to teach students how to be more conscious consumers of AI tools (be it generative AI, machine learning, or just the algorithms behind the websites they visit).
The most surprising thing to me is how old a technology AI is and how little research there is on teaching AI concepts to students or people with little to no programming experience. It makes me a little fearful when I consider the lack of public understanding of these technologies and the speed at which AI products are entering the consumer space.
Developing Middle School Students’ AI Literacy
I really liked this paper as it showed a practical application of some of the Design Considerations and Competencies addressed in Long and Magerko’s research into AI Literacy. We also got to see some possible hurdles that we’ll most likely encounter when we’re conducting our own research towards the end of the semester.
I appreciated that the researchers focused on middle school students from groups underrepresented in STEM, but I wish they had also included students with no programming experience at all. They were limited by the pandemic, but it's something to keep in mind when we choose which students to test our project with.
It was interesting to see how formative the middle school years are for children's development and career aspirations. It really highlighted how important it is to teach children these skills during that critical window in their development. Any earlier and there's a chance they'll be too young to understand the lessons being taught; any later and you run the risk of them getting frustrated with the technical side of learning AI literacy. From my own experience, I can say that I didn't initially pursue a STEM degree because I was scared of the math classes that would have been required; had I been introduced to STEM concepts earlier in school, maybe that would have been different. As the authors note, "students who do not express STEM-related aspirations at age 10 are unlikely to develop them by the age of 14."
A Chat with ChatGPT
While brainstorming with ChatGPT about how to engage students with AI concepts, one of the ideas it offered was introducing AI through music creation. ChatGPT then suggested exploring how AI can be used to recognize beatboxing patterns. This was my favorite part of that conversation (I've added a few rough code sketches of my own after each response to see what these ideas might look like in practice):
Me:
What kind of AI tools would be used to analyze beatboxing patterns?
ChatGPT:
Analyzing beatboxing patterns using AI involves the use of machine learning algorithms that can recognize and understand rhythmic and percussive elements in audio. Here are some AI tools and techniques that can be used for analyzing beatboxing patterns:
Audio Signal Processing:
Basic audio signal processing techniques can be employed to extract features from beatboxing sounds. This includes analyzing the frequency spectrum, amplitude, and timing of different beats and sounds within the audio signal.
Machine Learning Models:
Train machine learning models, such as neural networks, to recognize patterns in beatboxing. Supervised learning can be used with labeled datasets containing examples of different beatboxing patterns. The model learns to identify specific patterns and classify them accordingly.
Onset Detection:
Onset detection algorithms can be used to identify the starting points of each beat in a beatboxing sequence. This is crucial for understanding the rhythm and timing of different sounds.
Spectral Analysis:
Utilize spectral analysis techniques to examine the frequency content of beatboxing sounds. Different beatboxing sounds often have distinct frequency characteristics, and AI algorithms can learn to distinguish between them.
Pattern Recognition:
Train AI models for pattern recognition to identify common patterns in beatboxing, such as snare hits, kicks, and vocalizations. These models can then analyze audio sequences and recognize the presence of specific beatboxing elements.
Deep Learning for Audio:
Deep learning architectures, such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), can be employed for more complex beatboxing pattern analysis. These models can capture both spatial and temporal dependencies in the audio data.
Feature Extraction:
Extract relevant features from audio signals, such as rhythm, tempo, and dynamics. Feature extraction is essential for providing input data to machine learning models.
Open Source Libraries:
Leverage open-source audio processing libraries and frameworks. Libraries like LibROSA in Python provide tools for analyzing audio signals and extracting features relevant to beatboxing.
Real-Time Processing:
Explore real-time processing capabilities to analyze beatboxing patterns as they are being performed. This can involve using algorithms that can adapt and respond dynamically to live beatboxing input.
Interactive Applications:
Develop interactive applications that allow users to beatbox, and the AI system provides feedback or analysis in real-time. This can be a fun and engaging way for students to experience the intersection of AI and beatboxing.
When working on beatboxing pattern analysis, it's essential to provide students with hands-on experience in collecting and labeling datasets, training models, and interpreting the results. This practical approach can enhance their understanding of both AI and beatboxing concepts.
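To make the first few suggestions more concrete for myself, here is a minimal sketch of the "Audio Signal Processing," "Onset Detection," and "Spectral Analysis" ideas using librosa, the open-source library ChatGPT mentions. The file name beatbox_clip.wav is just a placeholder, and the parameters are library defaults rather than anything tuned for beatboxing.

```python
# Minimal sketch: basic rhythm and spectral features from a beatboxing clip.
# Assumes a short WAV file named "beatbox_clip.wav" (placeholder) is available.
import librosa
import numpy as np

y, sr = librosa.load("beatbox_clip.wav", sr=22050)

# Onset detection: estimate when each percussive hit starts (in seconds).
onset_times = librosa.onset.onset_detect(y=y, sr=sr, units="time")

# Rough tempo estimate from the onset strength envelope.
onset_env = librosa.onset.onset_strength(y=y, sr=sr)
tempo, _ = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)
tempo = float(np.atleast_1d(tempo)[0])

# Spectral analysis: MFCCs summarize the frequency content of each frame.
mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

print(f"{len(onset_times)} onsets detected, tempo ~{tempo:.0f} BPM")
print("MFCC feature matrix shape:", mfccs.shape)  # (13 coefficients, n_frames)
```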
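And here is a rough sketch of the "Machine Learning Models" / "Pattern Recognition" ideas: classifying individual beatbox sounds (kick vs. snare vs. hi-hat) from MFCC features. The labeled dataset and folder layout are hypothetical; in a classroom setting, students could record and label these clips themselves, which fits ChatGPT's point about hands-on data collection.

```python
# Rough sketch: classify short beatbox sounds from averaged MFCC features.
# The folder layout beatbox_dataset/<label>/*.wav is hypothetical.
import glob
import librosa
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def mfcc_summary(path, n_mfcc=13):
    """Load a short clip and summarize it as the mean MFCC vector."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # one 13-dimensional feature vector per clip

# Hypothetical labeled dataset: one folder per sound class.
features, labels = [], []
for label in ["kick", "snare", "hihat"]:
    for path in glob.glob(f"beatbox_dataset/{label}/*.wav"):
        features.append(mfcc_summary(path))
        labels.append(label)
features = np.array(features)

# Train a simple classifier and check accuracy on held-out clips.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```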
Me:
Would the same thing be used for language recognition?
ChatGPT:
While there are some similarities, language recognition and beatboxing pattern analysis involve different approaches and techniques due to the nature of the data. Here are some distinctions and considerations for language recognition:
Audio Signal Processing:
Both language recognition and beatboxing pattern analysis involve audio signal processing, but the focus is different. Language recognition may involve analyzing speech signals to identify linguistic features, while beatboxing analysis focuses on recognizing rhythmic and percussive elements.
Speech Recognition Models:
For language recognition, specialized models are often used for speech recognition. Automatic Speech Recognition (ASR) systems leverage models trained on large datasets to transcribe spoken words into text.
Mel-Frequency Cepstral Coefficients (MFCCs):
In language recognition, MFCCs are commonly used as features for representing speech signals. These coefficients capture the spectral characteristics of speech. Beatboxing analysis may also use MFCCs but with a focus on rhythmic patterns.
Natural Language Processing (NLP) Techniques:
Language recognition involves natural language processing techniques to understand the content of spoken or written words. Techniques such as sentiment analysis, part-of-speech tagging, and named entity recognition may be applied to language data.
Language Models and Classification:
Language recognition often employs language models for classification. These models are trained to identify the language or dialect being spoken. In contrast, beatboxing analysis may use models for pattern recognition to identify specific rhythmic elements.
Feature Extraction for Language Recognition:
Language recognition may involve extracting features related to speech characteristics, intonation, and phonetic patterns. The goal is to capture linguistic information. Beatboxing analysis, on the other hand, focuses on features related to rhythm, timing, and percussive sounds.
Language Embeddings:
Advanced language recognition systems may use language embeddings, which represent languages in a continuous vector space. This allows for understanding language relationships. Beatboxing analysis may not require such embeddings but may benefit from rhythm embeddings.
Multimodal Approaches:
Language recognition can also involve analyzing visual cues, such as lip movements. Beatboxing analysis may be more focused on audio signals, but multimodal approaches could enhance understanding by incorporating visual elements.
Real-Time Processing for Language Recognition:
Language recognition systems may need to operate in real-time for applications like voice assistants. This requires efficient and low-latency processing. Beatboxing analysis may also benefit from real-time processing but with a focus on capturing dynamic rhythmic patterns.
While there are differences, the fundamental principles of signal processing, feature extraction, and machine learning apply to both language recognition and beatboxing analysis. It's important to tailor the approach based on the specific characteristics and goals of each application.
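For the language side, the pipeline starts from transcription rather than rhythm. Here is a tiny sketch of the "Speech Recognition Models" point using the open-source SpeechRecognition package (my own choice of library, not something from the conversation); it sends the placeholder file speech_clip.wav to the free Google Web Speech backend, so it needs an internet connection.

```python
# Minimal ASR sketch: audio -> text, the step that beatboxing analysis skips.
# "speech_clip.wav" is a placeholder; recognize_google() needs internet access.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("speech_clip.wav") as source:
    audio = recognizer.record(source)  # read the whole clip into memory

try:
    text = recognizer.recognize_google(audio)
    print("Transcript:", text)
except sr.UnknownValueError:
    print("Speech was unintelligible to the recognizer.")
except sr.RequestError as err:
    print("Could not reach the recognition service:", err)
```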