Not long after I had received most of my students’ mid-semester survey results, I came across an AI tool that would create a song. The tool is Suno (suno.com) and it uses AI to create both the music and the lyrics. For some odd reason, I wondered if it might create a song based on my student survey results.
Well, it could use the narrative portions, but it could not discern the Likert scale results. So, I took that information and the narratives and placed them into Google’s Gemini to develop a summary of the results. But the summary was neither poetic nor lyrical, so I asked Gemini to write some song lyrics based on the survey summary. Silly, I know, but that is how this story started. I then added the lyrics to Suno and chose ska/reggae as the style of music. The students in the class got a kick out of it and the many faculty I shared it with got a good laugh.
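I did all of this by pasting text into Gemini’s web interface, but if you would rather script the summarize-then-lyricize step, a minimal sketch with Google’s Gemini Python library might look like the following. The file name, model name, and prompts are just placeholders I am assuming here, and you would still paste the finished lyrics into Suno by hand.

```python
# A rough sketch of the summarize-then-lyricize step using Google's Gemini API.
# The API key, file name, model name, and prompts are all placeholders/assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

# The narrative (open-ended) survey responses, saved to a plain text file.
with open("midsemester_narratives.txt", encoding="utf-8") as f:
    survey_text = f.read()

# First pass: a plain-language summary of the feedback.
summary = model.generate_content(
    "Summarize the main themes in these student survey comments:\n\n" + survey_text
)

# Second pass: turn that summary into song lyrics.
lyrics = model.generate_content(
    "Write upbeat ska/reggae song lyrics based on this summary of student "
    "feedback about a course:\n\n" + summary.text
)

print(lyrics.text)  # copy and paste these into Suno and pick a style
```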
And there it was, a song about my class and how the students felt about it.
It took a week or two, but this all got me thinking about the 25 years of student survey results I have seen as a teacher. I am pretty thick skinned, but having worked with faculty for many years, I know how difficult institutional surveys can be. Often, the most disgruntled students point to course deficiencies, and that is what the faculty remember. Then there are deciles and scores and comparisons and the worry that these numbers will negatively affect the tenure process or rehiring as a part-time faculty member. I wondered if those single-page PDFs could be shared in a different way. Maybe someone singing the results to them? Maybe just a friendly voice letting them know about the things they did well and the things they might improve upon.
I thought about how different the experience of a voice might be when delivering the information. Better? Worse?
I had been playing with Google’s NotebookLM (notebooklm.google.com) and I wondered what raw survey data might sound like in the “podcast” feature. It was really interesting to hear two “people” talk about my class. I wondered what other sources I might give NotebookLM to fill out the conversation and how I might customize the podcast.
At first, I added the course goals and objectives. But you can also add YouTube videos, so I added the seven-minute course introduction. It was an online class, so I added the Quality Matters (QM) standards, and because the class was about pedagogy and technology, I added the ISTE standards. Adding these other sources really started to flesh out the podcast, as the successes and challenges were framed against existing standards in teaching and learning.
Initially, I was just clicking "Generate" for the podcast. It took a few times for me to even see that I could customize it. When I did, I used prompts like, "How can this class be improved around high impact practices? Give specific examples. Give specific examples of how the workload could be more balanced."
Below is one of the first results.
This went on for several more weeks. Soon it was the end of the course, and I had asked the students for a final evaluation with a few broad questions. They all wrote quite a bit about their experiences in the course, and I added it to the sources in NotebookLM. I also added a video about open pedagogy by David Wiley and another from Stephen Downes. I included a piece of writing about care and inclusivity in higher education. By adding all of those, the podcast was becoming pretty interesting and full of ideas for improvement backed up by solid resources.
One thing I learned was that even if the voices are AI generated, it feels good to hear positive things about your course, especially when the kind words are backed up by how some of your heroes would agree with what you are doing. The podcasters always start out with praise, and that feels better than the charts and decile numbers on the university student survey.
I keep trying to get the podcast to really focus on improvement, and thus far I have already made a few adjustments to the class based on suggestions they made. Those suggestions were alluded to in the student narratives, but the full set of resources I had included in NotebookLM made those comments seem different. I don’t know how to explain that. Maybe it was that in the student narratives it was one or two sentences among many, while in the podcast it was worded differently and given more detail. The human voice is amazingly powerful, even when it is not actually human.
One day, I realized I could prompt the podcasters to address me by name.
Yes, having your work praised by super fans is always nice.
Where am I today? Well, I started seeing if I could transform the two podcasters’ text into a single person giving me the same advice. I am interested to see if a single voice feels different. It has met with limited success, as all of this has been done with free tools. There are time limits and character limits on the free versions of many of the tools.
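If you want to try the single-voice experiment yourself, a minimal sketch with the free gTTS package looks something like this. It assumes you have copied the podcast transcript into a plain text file, and gTTS is just one of many free text-to-speech options, not necessarily the best one.

```python
# A minimal sketch: render a saved podcast transcript as a single voice.
# Assumes the transcript has been copied by hand into transcript.txt.
from gtts import gTTS

with open("transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

# gTTS (the free Google Translate text-to-speech) reads everything as one narrator.
gTTS(text=transcript, lang="en").save("single_voice_feedback.mp3")
```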
Google’s NotebookLM has a beta feature that allows you to ask the podcasters questions. I have learned that if you state your name, they refer to you directly. I have also learned that the answers they provide are pretty amazing and that you can really drill down to specific details around assignments and course structure.
I am also working on getting a few faculty to try the same thing so I can see if others feel similarly.
One faculty member I shared my adventure with added several years’ worth of university evaluations for a specific class and looked for consistent aspects of her teaching so that she might find ways to express that in writing, perhaps for tenure packages. Tying that to academic language around best practices might create new ways for her to express the quality of her teaching. That is good.
Another is looking into using it as a means for transforming his course syllabus into a conversation. He noted that he felt as if he could improve on the purely text-based version. And that is certainly true.
At this point my prompts include, "This podcast is for Todd, the instructor of the course. Speak to him directly. Describe ways the class can be improved related to the QM standards, High Impact practices, Open Education practices, and care and inclusion practices. Give specific examples of changes to assignments or delivery of the course. Identify where the course meets the program and school outcomes and where it is deficient. Identify where it hits and misses the course objectives." Stuff like that...
Below is the most recent version I have.
Well, now the thing has video. So, I guess I should mention that...
How might this add to how we read/hear/watch our students’ thoughts about our classes? Will we listen longer? Will we hear things we had not observed in the charts and narratives of the surveys?
I don't know. But I am interested in learning about the experiences of others and how they improve courses with or without NotebookLM.