Welcome! Please consider how your attitude affects your and other students' experiences of the lesson.
Be respectful, come prepared, and show interest to have the best possible educational experience.
🕒~20 minutes ✍️ Create and Correct 🎯Grammar Skills
By yourself or together with your classmate(s), complete the worksheet.
First, rewrite the sentences into participle clauses on the first page.
Then complete the exercises on the second page, which are based on the texts you wrote during lesson AMF-3.
Once done, check the answer key below.
Rewriting sentences into participle clauses
Begin with the verb in the ING-form (present participle):
Realising that it was too difficult, I asked for help.
Stepping on to the stage, I took a deep breath.
Begin with the third form of a verb (past participle):
Seen from afar, the mountain looked much smaller than it really was.
Impressed by the cooking, she decided to order another dish.
Begin with having + the third form of a verb (perfect participle):
Having lost his phone, he needed to buy a new one.
Having been single for so long, she thought she would never find true love.
Begin with a preposition + ING-form of a verb (present participle):
While sleeping, I accidentally kicked my cat.
Before eating dinner, I washed my hands for 30 seconds.
Precision (these are examples and not necessarily the only correct answers)
AI is an indispensable tool for doctors to find cancer early.
Many people have an apprehension that AI will become hostile.
The detrimental effects of automation include job displacement.
AI is much more sophisticated than the chatbots of the 1950s.
Some fear the eradication of the human race by machines.
Nominalisation (these are examples and not necessarily the only correct answers)
The rapid evolution of AI necessitates the implementation of new laws.
Any harm caused to a human by a robot is a breach of the first law.
Our over-reliance on AI presents significant dangers.
🕒~20-30 minutes ✍️ Read, Watch and Discuss 🎯Reading Comprehension
You are going to read about the risks of artificial intelligence.
Together with a classmate, complete the vocabulary exercise below by connecting definitions with words.
Take turns reading Pause Giant AI Experiments: An Open Letter together
Watch AI Slop | 67 Minutes
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
Published 22 March, 2023
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.
Used with permission from the authors.
Source: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Choose a path to discuss Pause Giant AI Experiments: An Open Letter & AI Slop 67 Minutes
Both include similar discussion exercises, but at different challenge levels
🕒~40 minutes
✍️ Discuss with your classmate(s)
🎯Discussion Skills
Think about a "deepfake" video. If someone shares it, who is more responsible for the lie: the person who made the AI tool, or the person who clicked "share"? Why?
Do you think the letter is being "too dramatic," or is it right to worry about the end of civilization?
The letter was written when GPT-4 was brand new. Today, we have even faster models. Do you feel like the government and regular people understand AI better now than they did in 2023? Does "more powerful" AI feel more dangerous or just more useful?
🕒~2 lessons and after school
✍️ Read and Reflect
🎯Reading and Discussion Skills
By yourself, read N.K. Jemisin's 2014 Valedictorian
Read the text here: https://www.lightspeedmagazine.com/fiction/valedictorian/
Used with permission from the author.
Next lesson, summarise and compare Valedictorian and Pause Giant AI Experiments: An Open Letter OR AI Slop 67 Minutes using participle clauses:
The verb in the ING-form (present participle)
The third form of a verb (past participle)
Having + the third form of a verb (perfect participle)
A preposition + ING-form of a verb (present participle)
Focus on creating effective and precise sentences with correct grammar and spelling.
🕒~40 minutes
✍️ Discuss with your classmate(s)
🎯Discussion Skills
The authors believe AI systems pose risks because they can be unpredictable and uncontrollable, may flood information channels with untruth, and could eventually outsmart and replace humans. Think of bots, propaganda, job automation, or so-called "AI slop".
Critics of the letter often argue it focuses too much on "science fiction" risks (losing control of civilization) and not enough on immediate harms (bias, job displacement, environmental impact). After reading it, do you feel the authors are being "alarmist"?
The letter specifically mentions systems "more powerful than GPT-4." Since then, we have seen models like Claude 3, Gemini 1.5, and GPT-4o. Does the world feel more or less "in control" of these systems than it did in early 2023?
No Homework
Share your discussion