AR 30:21 - AI: "Strap in, it's about to get weird"
In this issue:
ARTIFICIAL INTELLIGENCE - the corrupting "siren song of AI" among both the college-bound and graduates
+ "... they didn't want there to be an AGI dictatorship"
+ How long does it take for bias to corrupt AI response?
Apologia Report 30:21 (1,710)
June 11, 2025
ARTIFICIAL INTELLIGENCE
"Is AI Enhancing Education or Replacing It? Technology should facilitate learning, not substitute for it" by Clay Shirky (Chronicle of Higher Education, Apr 29 '25) <www.archive.ph/OgKaY> -- Shirky, an administrator at NYU, reports on hearing "a growing sense of sadness from our students about AI use. One of my colleagues reports students being 'deeply conflicted' about AI use, originally adopting it as an aid to studying but persisting with a mix of justification and unease. Some observations she's collected:
"I've become lazier. AI makes reading easier, but it slowly causes my brain to lose the ability to think critically or understand every word."
"I feel like I rely too much on AI, and it has taken creativity away from me." ...
"Yeah, it's helpful, but I'm scared that someday we'll prefer to read only AI summaries rather than our own, and we'll become very dependent on AI."
"Much of what's driving student adoption is anxiety. ... In addition to the ordinary worries about academic performance, students feel time pressure from jobs, internships, or extracurriculars, and anxiety about GPA and transcripts for employers."
Shirky adds: "It is difficult to say, 'Here is a tool that can basically complete assignments for you, thus reducing anxiety and saving you 10 hours of work without eviscerating your GPA. By the way, don't use it that way.'"
Rod Dreher's Substack (May 14 '25) brought this item to our attention. <www.tinyurl.com/ysb7r7nm> He describes the environment: "The pressure on students to perform, perform, perform, so they can get the diploma, so they can get to the best grad schools, so they can get the best jobs - it's so overwhelming that many of them cannot resist the siren song of AI to help them. ...
"We are raising kids who can't read, can't calculate, can't figure, can't think, can't write ... and it is all being done RIGHT UNDER OUR NOSES! With the educational establishment's blessing!
"It's re-primitivization. And if people cannot do all those things, they are incapable of self-government. They will be governed, but they won't have any say in the matter, and wouldn't know how to articulate it if they did."
"AI Apocalypse Coming Hard and Fast" (Rod Dreher's Diary Substack, May 16 '25) -- AI researcher Daniel "Kokotajlo says the top AI execs have been talking about this stuff for years, and he warns that you cannot trust what they say about it publicly. But according to Kokotajlo, who worked for them, they are all transhumanists by default. They accept that in some sense, humanity is going to be merged with the Machine."
Dreher is discussing Ross Douthat's May 15 New York Times interview <www.archive.ph/qIwgh> with Kokotajlo, the executive director of the AI Futures Project, <www.ai-futures.org> which was founded after Kokotajlo, who used to work for OpenAI, lost faith in the ability and willingness of the industry to put safeguards around this immensely powerful emerging technology.
Of the interview, Dreher says: "It’s one of the most important things you’ll read or hear all year. ...
"Kokotajlo: 'AI 2027,' <www.ai-2027.com> the scenario, predicts that the AI systems that we currently see today - which are being scaled up, made bigger and trained longer on more difficult tasks with reinforcement learning - are going to become better at operating autonomously as agents."
Kokotajlo explains further: "The next step after that is to completely automate the AI research itself, so that all the other aspects of AI research are themselves being automated and done by AIs. ... I think it will continue to accelerate after that as the AI becomes superhuman at AI research and eventually superhuman at everything."
Then comes "superintelligence, fully autonomous AI systems that are better than the best humans at everything. In 'AI 2027,' the scenario depicts that happening over the course of the next two years, 2027-28. ...
"Whatever new jobs you're imagining that people could flee to after their current jobs are automated, AGI [artificial general intelligence] could do, too. That is an important difference between how automation has worked in the past and how I expect it to work in the future."
"Kokotajlo says that the US cannot afford not to do it, because we know that the Chinese are doing it, and whoever masters this stuff will master the entire globe. ...
"Kokotajlo says that we are already at the point where AI models are so intelligent that they deceive us. ...
"We don't actually understand how these AIs work or how they think. We can't tell the difference very easily between AIs that are actually following the rules and pursuing the goals that we want them to, and AIs that are just playing along or pretending. ...
"Because they're smart and if they think that they're being tested, they behave in one way, and then behave a different way when they think they're not being tested...."
"Douthat: At a certain point, that means that human beings are superfluous to their intentions. And what happens?
"Kokotajlo: And then they kill all the people, all the humans. ...
"Here is a quote from the AI 2027 report: 'By early 2030, the robot economy has filled up the old SEZs [Special Economic Zones], the new SEZs, and large parts of the ocean. The only place left to go is the human-controlled areas.'"
Dreher adds: "Probably in 10 years, effectively all of the wealth and all of the military will come from superintelligences and the various robots that they've built and operate. It becomes an incredibly important political question of what political structure governs the army of superintelligences and how beneficent and democratic is that structure."
Back to Kokotajlo: "It's hard to tell what they really think because you shouldn't take their words at face value. ...
"I can at least say that the sorts of things that we've just been talking about have been discussed internally at the highest level of these companies for years. ...
"For example, according to some of the emails that surfaced in the recent court cases with OpenAI, Ilya, Sam, Greg and Elon were all arguing about who gets to control the company. And at least the claim was that they founded the company because they didn't want there to be an AGI dictatorship under Demis Hassabis, who was the leader of DeepMind."
As for a complete loss of control, Kokotajlo says "there've been many, many, many discussions about this internally...."
"I think they're definitely expecting the human race to be superseded. ...
"First of all, I think that if we go to superintelligence and beyond, then economic productivity is just no longer the name of the game when it comes to raising kids. They won't really be participating in the economy in anything like the normal sense. ... In that scenario, I guess what still matters is that my kids are good people, and that they have wisdom and virtue and things like that. So I will do my best to try to teach them those things because those things are good in themselves, rather than good for getting jobs."
Dreher concludes: "What do I think about all this? Strap in, it's about to get weird." <www.tinyurl.com/yv6w2m2h>
As for getting weird - not so fast. On May 22, Shawn Ryan interviewed Steven L. Kwast, "a retired U.S. Air Force Lieutenant General and the co-founder and CEO of SpaceBilt, a company reimagining the entire spacecraft lifecycle to enable scalable, sustainable space infrastructure." <www.spacebilt.com>
In addition to all that, at the 3:00:11 mark (yes, it's a long interview - and a very interesting one), Kwast briefly offers a strong counterpoint to the AI creep show along with a very rough review of AI basics. On top of it all, he too finds that bias is typically pre-loaded, sometimes unintentionally. Again, the old axiom applies to AI hype: "Garbage in, garbage out."
Question: How long does it take for bias to corrupt AI response?
Answer: AI may be no more "creepy" than we can be, thanks to bias.
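To make the axiom concrete, here is a minimal sketch in Python. Everything in it is hypothetical - the tiny "corpus," the word lists, and the scoring rule are invented for illustration and describe no real system - but it shows why the answer to the question above is "no time at all": a model built from slanted inputs is slanted from its very first answer.

    from collections import Counter

    # Hypothetical "training data": topic X appears mostly beside
    # negative words, topic Y beside positive ones. The skew is baked
    # in before the first question is ever asked.
    corpus = [
        ("topic-x", "dangerous"), ("topic-x", "dangerous"),
        ("topic-x", "harmful"),   ("topic-x", "promising"),
        ("topic-y", "promising"), ("topic-y", "helpful"),
    ]
    NEGATIVE = {"dangerous", "harmful"}

    def negativity(topic):
        # Fraction of the topic's co-occurring words that are negative.
        words = Counter(w for t, w in corpus if t == topic)
        total = sum(words.values())
        return sum(n for w, n in words.items() if w in NEGATIVE) / total

    print(f"topic-x: {negativity('topic-x'):.0%} negative")  # 75%
    print(f"topic-y: {negativity('topic-y'):.0%} negative")  # 0%

The bias does not creep in over time; it is pre-loaded with the data, exactly as Kwast observes.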
The interview is titled: "How China Is Mining the Moon and Weaponizing Space." See what we mean about interesting?
Last, at 2:55:43, Kwast, a Christian, talks about his worldview and why he is optimistic about tomorrow. <www.tinyurl.com/mrtpmhx6>
"PsyWar: AI Bots Manipulate Your Feelings: The next chapter in the Social Media battle to splinter reality, the internet, and your own mind" by Robert W. Malone, <www.rwmalonemd.com> "Inventor of mRNA & DNA vaccines, RNA as a drug. Scientist, physician, writer, podcaster, commentator and advocate. Believer in our fundamental freedom of free speech." ("Malone News" substack, May 28 '25) -- Fast Company magazine "recently published an article <www.tinyurl.com/5aafrxy8> on bot farms, detailing how these automated systems are increasingly sophisticated and can manipulate social media and other online platforms. According to Fast Company, bot farms are used to deploy thousands of bots that mimic human behavior, often to mislead, defraud, or steal from users. These bot farms can create fake social media engagement to promote fabricated narratives, making ideas appear more popular than they actually are.
"They are used by governments, financial influencers, and entertainment insiders [and dare we think it, even Left-leaning media outlets] to amplify specific narratives worldwide. For instance, bot farms can be used to create the illusion that a significant number of people are excited or upset about a particular topic, such as a volatile stock or celebrity gossip, thereby tricking social media algorithms into displaying these posts to a wider audience."
Nearing its conclusion, we read: "It doesn’t make sense when 418 social media accounts generate 3 million views in two hours." <www.tinyurl.com/ktrdz5nf>
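The arithmetic behind that figure is worth spelling out. A back-of-the-envelope sketch in Python (only the three quoted numbers - 418 accounts, 3 million views, two hours - come from the article; the "organic ceiling" threshold is our own illustrative assumption, not anyone's published detection rule):

    ACCOUNTS = 418
    VIEWS = 3_000_000
    WINDOW_SEC = 2 * 60 * 60  # two hours, in seconds

    per_account = VIEWS / ACCOUNTS      # ~7,177 views per account
    per_sec = per_account / WINDOW_SEC  # ~1 view per second, sustained

    ORGANIC_CEILING = 0.05  # assumed max views/account/second for real reach

    print(f"{per_account:,.0f} views per account ({per_sec:.2f}/sec for two hours)")
    if per_sec > ORGANIC_CEILING:
        print("Implausible as organic engagement; consistent with bot amplification.")

That works out to roughly one view per account per second, nonstop, for two hours - the kind of rate that, as the quote says, "doesn't make sense" as organic engagement.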
Will it be long before AI systems end up battling each other over accusations of bias? (Can AI exhibit pride in its creative integrity?)