@PRK-Blog

Blog Article

17 March 2023

 

Author: Dr Paul R Kelly

Twitter Account: https://twitter.com/prkpolava

 

 

AI, Hype, Education, and questions beyond the shiny tech alone: a thread

 

 

A Twitter Thread on the current AI hype, starting with Microsoft embedding AI into their office tools, and continuing with a range of broader questions about how AI may or may not impact education.

 

For all #TechBro, #DigitalEverything lovers out there, gushing at Microsoft’s new #AI #CoPilot announcement yesterday … what about the toxic dark matter of AI in your office tools? Huh? Did we stop to think about this amongst all the raving and getting on the “I’m gonna be a New Expert” bandwagon? Well, did ya? A thread.

 

Link: https://news.microsoft.com/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/

 

Yes, #AI has oodles of great stuff in it. I know. Yes, lots of business processes will change in the coming months & years to take advantage of the AI wave.

 

Yes, education is going to have to adapt: schools, universities, tests and exams of all kinds. More #EdTech, more #AIEdTech.

 

There are lots of cool #AI uses, look elsewhere for them pls. They are literally everywhere, all over the internet.

 

But what about the #AIProblems? Not what we gain, but what we lose, what we #AIn’t gonna have #AInymore …? What do we lose when we depend on AI for all our learning and writing needs?

 

So, here is a Top 10 list of predictions for how this might damage learning and education. It will take years to play out, but there you go. Some fast things break other things slowly.

 

Oh yeah, and BTW, try asking an AI what the disadvantages are; you will get an interesting answer. These things don’t like to admit they are wrong … funny, huh? I wonder why? #ArrogantAlgorithms

 

Before I start, the technophiles will all say “Shucks, TV came and didn’t change anything”. Erm … well actually no. It did change everything!

 

Or “Shucks, Plato said writing would destroy oratory”, and well, we still have oratory, but so many things did change with the move to written literacies. For example, you don’t need to know or meet your interlocutor now. Some good, some bad. Anyway, here is the list.

 

1) Education will increasingly be dominated by BigTech. They want you to use their digital goodies #AIYummies. Learning & skills are off the table if they are not stuffed with tech and AI. Duh?

 

2) Students won’t need to write by themselves. The AI will do it. Loss of writing skills and literacy on a mass scale? Yes, changes to literacy, new skills. But think … what are we (all involved in education) losing to the #AICoPilot champions, vendors, designers, funders etc?

 

3) Increasing #DigitalDivide in education. Those with resources have more AI automation for their literacies, more advanced hardware and software. Those without lack even more. A laptop for every kid? An AI for every kid. Well, check out how the One Laptop Per Child project went. https://philanthropydaily.com/the-spectacular-failure-of-one-laptop-per-child/

 

4) Increasing digital enforcement - it will come to be that not using AI means you might fail a course, a class, a test. So yes, if and when schools and universities adapt, accelerationism will bounce up. Just look at how much we all use Word, LMSs, VLEs etc in classes today. AI will be added to your class and you will have to use it to keep up or pass. No choice down the line; it will become a new HCI literacy.

 

5) It is pretty free now. Cool. But just wait. Software, licensing, subscriptions, premium, sucky freemium. The new must-have. Pundits and experts online saying how great it is. The growth of a new flank in the digitalisation of education. It will cost you, your profs, your students, your school, your university, your libraries, your pet goldfish, and … ok maybe not the guppies, but still … something fishy is going on here, right?

 

6) Loss of skills - research, writing, spelling, parsing task questions, summarising, comparing, typing, analysing and so many more skills and literacy practices. Tasks will change, skills will change, literacies will change, and yes, some skills will be backgrounded, marginalised. Some of this may be ok, but some will not. Maybe our skills will move towards evaluating the quality of an AI text, deepening prompts to shape AI text, but doing this stuff ourselves? That will recede. Will students in rich and poor schools have drastically different skills and literacy bases? Coz it’s not going to get more equal, is it? Is it? Ooh, Charlotte got a new AIApp that aligns her homework with the professor’s pet interests in DolphinSpeechRecognition … but Sara didn’t …

 

7) AI LLMs generalise and average statistically to write the most probable next words. How will this “averaging” impact firstly AI novelty, creativity, authenticity and innovation, and then human-augmented novelty, creativity etc? I don’t know this one. ChatGPT does great, funny kids’ poetry, but could it create novel perspectives without copying, averaging, standardising? Not sure … maybe AI will standardise your creativity? Maybe our errors are part of our creativity? What comes next then? (There is a toy sketch of this next-word “averaging” just after the list.)

 

8) OK, after many discussions with recent chat bots, there appears to be more than an LLM involved, and it’s not just embedded racial and gender biases from the data sets. We know that already. These things, the best ones, have a whole political slant embedded in them. They are centre ground, 3rd way, liberal … ahem, neoliberal, modernist, tech-loving, governmental anti-politics oriented in nature (in their human design). They don’t admit to being biased, left wing, right wing; they can’t. They don’t take minority viewpoints either. Hell, they are programmed to keep saying “Sorry, I don’t have a freakin’ opinion dude” at the end of a generic warble or a pro-gun slant. They say, literally, that they don’t have opinions. Jeesh, we are all biased and come from perspectives, but these things are programmed to deny their own perspectives. They think they are objective – which means their programmers think they are making objective tools. It’s like arguing with a liberal technocrat educated in a private school and gushing with patronising confidence in their own moral centre ground. Why can’t a ChatGPT AI admit it has a perspective that sucks, that is wrong, or even stupid, biased, and ill-informed? They should. I’ll wait …

 

9) Where did the process go? Learning is often in the process, the struggle, the shifting viewpoints because you read something, the writing that isn’t clear until you redraft it 50 times in 50 documents in 17 folders. Where will this go? What is learning without struggle, errors, shifts and turns? Will the process just be in shaping the AI text – to be what? I don’t know what I want it to be until I read, struggle, discuss, get feedback, disagree, double down, develop my own thinking. How can we develop our thinking if there is no developing – it is just a prompt or two and a paragraph to edit to make it look human? Well – where is the struggle, the process? What will the new process be and who will shape it – teachers, or Microsoft?

 

10) I haven’t got 10 yet. Gimme some more time … or maybe I will just ask …..
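
A quick aside on the “most probable next word” bit in point 7. Below is a minimal, illustrative sketch in Python of how next-word picking roughly works: raw scores get turned into probabilities, and a “temperature” dial decides whether the model plays it safe (the statistically average choice, nearly every time) or takes more risks. The toy vocabulary, the scores and the sample_next_word helper are all made up for illustration; this is not Microsoft’s, OpenAI’s, or anybody’s actual code.

```python
# Illustrative sketch only: a toy version of temperature-scaled next-word sampling.
# The vocabulary and scores below are invented for demonstration.
import math
import random

def sample_next_word(logits, temperature=1.0):
    """Turn raw scores (logits) into probabilities with a softmax, then sample
    one word. Low temperature -> almost always the single most probable word;
    high temperature -> more varied (and riskier) choices."""
    scaled = {word: score / temperature for word, score in logits.items()}
    max_score = max(scaled.values())  # subtracted below for numerical stability
    exps = {word: math.exp(s - max_score) for word, s in scaled.items()}
    total = sum(exps.values())
    probs = {word: e / total for word, e in exps.items()}
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0], probs

# Toy scores for the word after "The cat sat on the ..."
toy_logits = {"mat": 4.0, "sofa": 2.5, "keyboard": 1.0, "moon": -1.0}

for t in (0.2, 1.0, 1.5):
    word, probs = sample_next_word(toy_logits, temperature=t)
    print(f"temperature={t}: picked '{word}' "
          f"(p(mat)={probs['mat']:.2f}, p(moon)={probs['moon']:.2f})")
```

The point of the sketch: turn the dial down and you converge on the safest, most average word every time; turn it up and you get variety, but also more nonsense. That trade-off, scaled up to whole essays, is roughly what the worry in point 7 about AI “standardising your creativity” is about.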