Preparing for the
Unanticipated
AI Applications in Higher Education
Ray Schroeder
UPCEA Sr. Fellow and Strategic Advisor
Google image impression of quantum ionic vacuum chamber 11/5/2023
Please Bookmark for Later Use!
This technology is far too important for us to respond in an ad hoc way to each release from a world filled with AI-enhanced program developers. We must create strategic, methodical, predictive processes that take capabilities into account and produce reasoned responses for adoption and support in our institutions. In this process, we must engage a range of constituencies, from developers in this field to students and employers. Collectively, we can share priorities that will enhance and streamline the development and adoption of the most relevant, efficient, and effective applications as they become available.
This power-pointless presentation is built on a 45-minute framework that is expandable to a three-hour workshop. As your time and interest permit, follow the links and test out the prompts to fill in the hours. Given the rapidly developing nature of the field, most citations include the publication date; many of the sites are updated regularly. Feel free to use these resources for your own presentations.
Using the GPT Builder on OpenAI's site on 11/14/2023, I created "Ray's EduAI Advisor." Note that the GPT Builder asked me 10 essay questions in order to align its responses with my basic tone and views on AI in higher education!
https://chat.openai.com/g/g-pLDOh2PHk-ray-s-eduai-advisor
(available to ChatGPT Plus subscribers)
ABOUT: Ray's EduAI Advisor is a consultant on generative AI in higher education, providing insights on global trends with the ability to reference specific regions or systems, like the U.S. higher education system, when relevant. It maintains a professional yet accessible tone, offering detailed analyses and concise overviews. The GPT emphasizes current, reliable information, avoiding speculation while addressing hypothetical scenarios and future predictions with a clear disclaimer about their speculative nature. It clarifies ambiguous queries but generally delivers informed responses based on the latest global and regional developments in AI. This GPT serves as a well-informed, approachable academic advisor, skilled in discussing AI's diverse impacts on education. It is currently in Beta and only available to ChatGPT Plus users. Below are its recommendations to university administrators on how to prepare for the coming developments in AI at their universities:
Prompt: How should universities prepare to respond to new developments?
How should they determine which apps to support and which ones are less likely to sustain over time?
Ray's EduAI Advisor
Universities need to adopt a proactive and strategic approach to respond effectively to new developments in AI and determine which applications to support. Here are some key strategies:
Establish a Dedicated AI Strategy Team: Universities should form teams comprising IT experts, educators, and administrative staff to monitor AI developments and assess their potential impact on higher education. This team should be responsible for developing a comprehensive AI strategy tailored to the university's needs and goals.
Focus on Long-Term Viability and Impact: When evaluating AI applications, universities should consider their long-term sustainability and potential to enhance the educational experience. This includes assessing the developer's reputation, the technology's scalability, and its ability to integrate with existing systems.
Emphasize Ethical and Privacy Considerations: Any AI tool adopted should adhere to ethical guidelines and ensure the privacy and security of student data. Universities should establish clear policies and frameworks for evaluating the ethical implications of AI tools.
Pilot Programs and Continuous Evaluation: Implementing pilot programs can help universities assess the effectiveness of AI applications in a controlled environment before wide-scale adoption. Continuous evaluation of these tools is crucial to understand their impact and make necessary adjustments.
Invest in Staff Training and Development: Universities should invest in training faculty and staff to effectively utilize AI tools. Understanding how to integrate these technologies into the curriculum and administrative processes is key for maximizing their benefits.
Stay Informed About Regulatory Changes: The regulatory landscape for AI in education is evolving. Universities must stay informed about new regulations and standards to ensure compliance and ethical use of AI.
Engage with the Academic Community and Industry Partners: Collaborating with other educational institutions and industry partners can provide insights into successful AI implementations and potential pitfalls. This network can also help in understanding which technologies have staying power.
Student Involvement and Feedback: Including students in the decision-making process and gathering their feedback on AI tools can provide valuable insights into user experience and effectiveness.
Foster a Culture of Innovation: Encouraging a culture that is open to experimentation and innovation can help universities adapt more readily to new technologies and methodologies.
Monitor Trends and Research: Regularly reviewing academic research and industry reports on AI in education will help universities stay up-to-date with the latest developments and best practices.
By adopting these strategies, universities can make informed decisions about which AI applications to support, ensuring they choose tools that not only provide immediate benefits but also have the potential for long-term impact and sustainability in the educational landscape.
Generative Artificial Intelligence Committee Report
We are pleased to share with you the initial report from the U-M Generative Artificial Intelligence Advisory (GAIA) Committee. This group was tasked with assessing the opportunities and challenges posed by generative artificial intelligence (GenAI), particularly as it relates to U-M and our ongoing mission. We want to thank every member of the committee who spent countless hours crafting this report.
Syllabi Policies for AI Generative Tools - Lance Eaton, Google Docs Repository 9/20/2023
This resource was created by Lance Eaton (contact him via email, Twitter, or LinkedIn) to share the range of syllabus policies developed by other educators and to help instructors craft their own for navigating AI generative tools (such as ChatGPT, MidJourney, DALL-E, etc.). If you would like to revise your submission, please fill out the form again and clarify in the comments that it's a replacement. Folks are welcome to download or share this resource, or parts of it, with their colleagues, institutions, and communities of practice.
The writing of prompts is both an art and a science. It requires you to write as if you were writing or speaking to an intelligent colleague while taking little shared background for granted. David Gewirtz of ZDNet writes, “When talking to a person, it would be natural to expect someone to miss your point initially and require clarification. It would be natural for someone you’re talking to to veer away from the topic at hand and need to be wrangled back on topic. It would be natural to fill in the backstory and ask complex questions, and it would also be natural to have to dig in, restating some of those questions based on the answers you got back.” So, as you compose your first prompt in the conversation, be as clear as you can, including background, even noting what you do not want in the response, and be ready to post follow-up prompts to refine the output.
This guide shares strategies and tactics for getting better results from large language models (sometimes referred to as GPT models) like GPT-4. The methods described here can sometimes be deployed in combination for greater effect. We encourage experimentation to find the methods that work best for you. Some of the examples demonstrated here currently work only with our most capable model, gpt-4. In general, if you find that a model fails at a task and a more capable model is available, it's often worth trying again with the more capable model.
When in doubt about topics related to GenAI, I directly ask an AI chatbot - normally I submit the same prompt to three different bots for comparison.
For example, Bard, Claude 2 and Perplexity run on different large language models; their results commonly differ significantly, supplementing each other. This serves to help fact-check.
Also, don't hesitate to use Google Search to back up facts.
You may want to begin many prompts with a request to "Provide URL citations of sources of responses to this prompt."
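The cross-checking workflow above can be sketched in a few lines of Python. This is purely illustrative: the "bots" here are stand-in callables, since Bard, Claude 2, and Perplexity are each reached through their own web interfaces or APIs; only the prompt-building and side-by-side comparison pattern is the point.

```python
# Sketch of the fact-checking workflow: prepend a citation request, send the
# same prompt to several bots, and collect the answers side by side.

CITATION_REQUEST = "Provide URL citations of sources of responses to this prompt.\n\n"

def build_prompt(question: str, want_citations: bool = True) -> str:
    """Prepend the citation request so every bot is asked to show its sources."""
    return (CITATION_REQUEST if want_citations else "") + question

def compare_bots(question: str, bots: dict) -> dict:
    """Send the same prompt to each bot; return answers keyed by bot name."""
    prompt = build_prompt(question)
    return {name: ask(prompt) for name, ask in bots.items()}

# Stand-in "bots" for illustration; in practice these would call the real
# services, which run on different underlying LLMs.
demo_bots = {
    "Bard": lambda p: "Answer A (with sources)",
    "Claude 2": lambda p: "Answer A (with sources)",
    "Perplexity": lambda p: "Answer B (with sources)",
}

results = compare_bots("When was UPCEA founded?", demo_bots)
for name, answer in results.items():
    print(f"{name}: {answer}")
```

Where two of the three answers agree and cite sources, you have a quick consistency check; where they diverge, that is the signal to dig deeper with a follow-up prompt or a Google Search.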
My favored bots currently are:
Google Bard https://bard.google.com/ (Gemini Pro LLM)
Claude 2 https://claude.ai/chats (Anthropic Claude 2 or 2.1 LLM)
Perplexity https://www.perplexity.ai/ (pplx-7b-online and pplx-70b-online + RAG)
ChatGPT https://chat.openai.com/ (I prefer the paid Plus version, which uses GPT-4 Turbo)
You https://you.com/ (OpenAI GPT-4 LLM + RAG (Retrieval-Augmented Generation))
Pi https://pi.ai/talk (social-emotional agent + more; uses the Inflection-1 or -2 LLM)
Enter a prompt on one side and it will fill in the other. Compare the results.
Constantly updated. As of 12/1/2023: 10,132 AIs for 2,532 tasks and 4,847 jobs. Grouped by topic or by date released, and searchable.
https://supertools.therundown.ai/gpts?s=03
One Example: ResearchGPT - AI Research Assistant. Search 200M academic papers from Consensus, get science-based answers, and draft content with accurate citations. By consensus.app
https://chat.openai.com/g/g-bo0FiWLY7-researchgpt
Now, we’re taking the next step on our journey with Gemini, our most capable and general model yet, with state-of-the-art performance across many leading benchmarks. Our first version, Gemini 1.0, is optimized for different sizes: Ultra, Pro and Nano. These are the first models of the Gemini era and the first realization of the vision we had when we formed Google DeepMind earlier this year. This new era of models represents one of the biggest science and engineering efforts we’ve undertaken as a company. I’m genuinely excited for what’s ahead, and for the opportunities Gemini will unlock for people everywhere.
As Peter Coy writes in his New York Times newsletter, "By harnessing the capabilities of GPT-5, OpenAI Academy will provide an extensive selection of courses and subjects, encompassing disciplines such as mathematics, science, music and art. Users will have the opportunity to engage with GPT-5 as their instructor, tutor, mentor, or companion, receiving tailored feedback and assistance throughout their learning journey. Additionally, the platform will empower users to create and distribute their own courses, facilitating a vibrant community of both learners and educators. This collaborative environment will encourage knowledge sharing and foster a sense of shared growth among its participants. The OpenAI Academy is set to launch in late 2023 and, in line with OpenAI’s mission to ensure the benefits of AI are accessible to all, it will be free for anyone who wants to use it, according to Sam Altman."
Sal Khan, the founder and CEO of Khan Academy, thinks artificial intelligence could spark the greatest positive transformation education has ever seen. He shares the opportunities he sees for students and educators to collaborate with AI tools -- including the potential of a personal AI tutor for every student and an AI teaching assistant for every teacher -- and demos some exciting new features for their educational chatbot, Khanmigo. (with funding from OpenAI)
I have always wanted to offer self-paced courses online, but previously we could not provide the peer discussions, group projects, and other student-to-student engagements that are so valuable in approaches where a cohort progresses together through a class. Now that is entirely possible. Using GenAI bots, instructors can create a cohort of students with differing backgrounds to enter into discussions virtually. For example, one can have a bot respond to discussion questions as a student with rural, urban, conservative, liberal, or a host of other characteristics. The bots can participate in synchronous or asynchronous exchanges, remembering and referencing prior discussions. These "synthetic students" can also join group projects. This enhancement will lift one of the constraints on pedagogy in at-scale, self-paced learning. Perhaps you have designed a class with activities for 24 students, but only 18 signed up; you can fill the open slots with GenAI virtual students. I expect the practice will become commonplace in 2024. And early testing has now begun at ASU! (see below)
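The persona setup behind such synthetic students can be sketched simply. This assumes a chat service that accepts a system prompt to hold a bot in character; the persona descriptions and wording below are illustrative, not any particular vendor's API.

```python
# Sketch of "synthetic students": build a system prompt per persona, and
# generate enough persona prompts to fill the empty seats in a class.

PERSONAS = {
    "rural": "a first-generation student from a small farming town",
    "urban": "a student who grew up in a large city",
    "conservative": "a student with broadly conservative views",
    "liberal": "a student with broadly liberal views",
}

def make_system_prompt(persona_key: str) -> str:
    """Build the system prompt that keeps a synthetic student in character."""
    background = PERSONAS[persona_key]
    return (
        f"You are a student in an online course. Your background: {background}. "
        "Respond to discussion prompts in the first person, stay in character, "
        "and reference earlier posts in the thread when relevant."
    )

def fill_open_slots(enrolled: int, designed_for: int) -> list:
    """Return persona prompts for the empty seats, cycling through personas."""
    keys = list(PERSONAS)
    needed = max(0, designed_for - enrolled)
    return [make_system_prompt(keys[i % len(keys)]) for i in range(needed)]

# A class designed for 24 students with 18 enrolled needs 6 synthetic students.
slots = fill_open_slots(enrolled=18, designed_for=24)
print(len(slots))  # 6
```

Each generated system prompt would seed one bot in the discussion forum, with the conversation history passed back on every turn so the synthetic student can reference prior posts.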
Last month, educational psychologist Ronald Beghetto asked a group of graduate students and teaching professionals to discuss their work in an unusual way. As well as talking to each other, they conversed with a collection of creativity-focused chatbots that Beghetto had designed and that will soon be hosted on a platform run by his institute, Arizona State University (ASU). The bots are based on the same artificial-intelligence (AI) technology that powers the famous and conversationally fluent ChatGPT. Beghetto prompts the bots to take on various personas to encourage creativity — for example, by deliberately challenging someone’s assumptions. One student discussed various dissertation topics with the chatbots. Lecturers talked about how to design classes.
In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant [agent] powered by artificial intelligence that’s far beyond today’s technology. For decades, I’ve been excited about all the ways that software would make teachers’ jobs easier and help students learn. It won’t replace teachers, but it will supplement their work—personalizing the work for students and liberating teachers from paperwork and other tasks so they can spend more time on the most important parts of the job. These changes are finally starting to happen in a dramatic way.
The potential of artificial intelligence (AI) systems to take over people's jobs is one of the most common fears surrounding the technology. It's partly why many people hesitate to try tools like text and image generators, and why there's such a strong demand for stronger AI regulation. These concerns aren't entirely off-base. The UK's Department of Education recently published a study that found that 10-30% of occupations can be automated by AI, with most of these being white-collar jobs. The study also looked at the jobs most likely to be exposed to AI in general, including tools beyond AI chatbots, like AI-assisted diagnostic tools in healthcare and algorithmic trading in finance, for example. The top five jobs most exposed to AI in general include management consultants and business analysts, financial managers, accountants, and psychologists. See the top six in the chart of 20 below. https://www.zdnet.com/article/these-are-the-jobs-most-likely-to-be-taken-over-by-ai/
Deloitte is using AI to potentially shift existing employees into new roles, Bloomberg reported. The move could help the firm stave off mass layoffs and balance new hiring growth. Deloitte is one among several consulting firms figuring out how to deal with new hires. The rapid advances in AI will bring about significant disruptions to the labor market. The bad news is that some workers are going to see their jobs and skills replaced by automation. The good news, though, is that AI could also help workers move into roles that are more in-demand. And major consulting firms like Deloitte are doing exactly that. The firm is deploying AI technology to assess its current employees' skills and figure out how to move them into areas of the business that are more promising, according to Bloomberg. The firm is essentially making a bet that AI will not only help it avoid mass layoffs, but also moderate its hiring growth in the coming years, Bloomberg reported.
https://www.businessinsider.com/deloitte-trying-to-avoid-layoffs-with-ai-2023-12
Thanks for research & development collaboration:
Katherine Kerpan, Loyola University Chicago
Here is a futuristic image depicting quantum computing powering artificial intelligence. It was created in less than one minute from my text description of an image representing a quantum computer powering AI. The scene includes a high-tech quantum computer connected to a humanoid robot, set against a digital landscape with neon lights and holographic displays. ChatGPT-4 generates images using DALL-E, a highly advanced AI model developed by OpenAI.
Technology history is filled with disruptive pairings -- cellphones plus wireless internet, for example -- but the combination of quantum computing and AI might become the most disruptive duo of all. The major potential of combining quantum computing with AI rests on the former's potential to accelerate the latter's capabilities. Quantum-powered AI could translate into breakthrough solutions for complex problems across a wide range of industries and scientific fields that are beyond the current capabilities of classical computers and traditional AI techniques, said Scott Likens, global AI and innovation technology leader at PwC.
In the ever-evolving digital landscape, a seismic revolution is on the horizon, one that promises to eclipse the impact of all previous technological revolutions combined. This revolution is poised to transform virtually every aspect of our lives. At the heart of this revolution lies quantum computing, a technology that harnesses the mind-boggling principles of quantum mechanics to perform computations that are far beyond the grasp of normal computers. Reliant on transistors processing binary code, conventional computers will give way to quantum computers that solve some complex problems much faster than even powerful supercomputers.
ResearchGPT https://chat.openai.com/g/g-bo0FiWLY7-researchgpt searches 200,000,000 research papers. Here's my prompt and response with clickable short citations:
ResearchGPT
You - please find a working definition of "artificial general intelligence"
ResearchGPT
Talked to chat.consensus.app
Artificial General Intelligence (AGI) is a domain within artificial intelligence that focuses on creating systems capable of performing any intellectual task that a human being can do. Definitions of AGI vary, reflecting the complexity and evolving nature of the field:
AGI is described as an artificial intelligence system that approaches the functional capabilities of natural intelligence, requiring a clear definition and methodological issues for implementation (Dubrovsky, 2021).
AGI systems relate to multiple types of human intelligence, including social, emotional, attentional, and ethical intelligence. They are characterized by their ability to efficiently exchange knowledge and wisdom through cooperation, collaboration, and meta-learning capacities (Cichocki & Kuleshov, 2020).
AGI can operate in unknown environments, under conditions of uncertainty, and autonomously learn and understand problem-domains, differentiating it from more narrowly focused AI applications (Firt, 2020).
AGI studies the feasibility and implementation aspects of artificial systems with domain-specific human-level intelligence, often involving self-organizing map (SOM) neural networks as a core component (Revithis, 2017).
Intelligence, in the context of AGI, is defined as the ability to acquire skills that narrow, diversify, and steer the search for general solutions to problems. This perspective emphasizes the importance of problem-solving capabilities in AGI systems (Rosa & Feyereisl, 2016).
In conclusion, AGI represents a comprehensive and multifaceted approach to AI, aspiring to match or exceed human capabilities in a broad range of intellectual tasks.
===
OpenAI defines AGI as "autonomous systems that outperform humans in most economically valuable tasks."
The path to achieving artificial general intelligence (AGI), AI systems with capabilities at least on par with humans in most tasks, remains a topic of debate among scientists. Opinions range from AGI being far away, to possibly emerging within a decade, to “sparks of AGI” already visible in current large language models (LLM). Some researchers even argue that today’s LLMs are AGI. In an effort to bring clarity to the discussion, a team of scientists at Google DeepMind, including Chief AGI Scientist Shane Legg, have proposed a new framework for classifying the capabilities and behavior of AGI systems and their precursors. DeepMind’s framework, like all things concerning AGI, will have its own shortcomings and detractors. But it stands as a comprehensive guide for gauging where we stand on the journey toward developing AI systems capable of surpassing human abilities.
https://venturebeat.com/ai/here-is-how-far-we-are-to-achieving-agi-according-to-deepmind/
During a polarizing (to say the least) interview at the New York Times' DealBook Summit, Elon Musk said he believes the world is less than three years away from AGI, which will have the ability to "write as good a novel as J.K. Rowling, discover new physics, or invent new technology."
https://www.therundown.ai/p/happy-birthday-chatgpt (Elon Musk language warning)
The president of tech giant Microsoft (MSFT.O) said there is no chance of super-intelligent artificial intelligence being created within the next 12 months, and cautioned that the technology could be decades away. Speaking to reporters in Britain on Thursday, Microsoft President Brad Smith rejected claims of a dangerous breakthrough. "There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now," he said. "What we really need are safety brakes. Just like you have a safety brake in an elevator, a circuit breaker for electricity, an emergency brake for a bus, there ought to be safety brakes in AI systems that control critical infrastructure, so that they always remain under human control," Smith added.
This paper uses economic modelling techniques to predict the expected arrival date of AGI (Artificial General Intelligence). The average predicted date from this analysis is 2041, with a likely range of 2032 to 2048 and an estimated earliest possible arrival date of 2028, i.e., just 5 years away. The average predicted date to get "reasonably close" to AGI is 2034, with a likely range of 2027 to 2041 and an estimated earliest date of 2026, i.e., just 3 years away. (N.B. Significant global-scale conflict in the interim could delay this process, but probably not by more than 10 years.) Together this implies that the period between now and the mid-21st century is likely to see the arrival of AGI and all the challenges and risks this can bring to humanity, including existential and catastrophic change, which current and not just future generations will have to face and manage.
But the researchers quickly realized that a model’s complexity wasn’t the only driving factor. Some unexpected abilities could be coaxed out of smaller models with fewer parameters — or trained on smaller data sets — if the data was of sufficiently high quality. In addition, how a query was worded influenced the accuracy of the model’s response. When Dyer and his colleagues posed the movie emoji task using a multiple-choice format, for example, the accuracy improvement was less of a sudden jump and more of a gradual increase with more complexity. And last year, in a paper presented at NeurIPS, the field’s flagship meeting, researchers at Google Brain showed how a model prompted to explain itself (a capacity called chain-of-thought reasoning) could correctly solve a math word problem, while the same model without that prompt could not.
https://www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316/
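The two prompting effects described in that passage, recasting a task as multiple choice and adding a chain-of-thought instruction, can be illustrated with simple prompt builders. The instruction wording here is an assumption for illustration, not taken from the cited research.

```python
# Sketch of two prompt formats that research has shown can change model
# accuracy on the same underlying question.

def as_multiple_choice(question: str, options: list) -> str:
    """Recast a free-response question as lettered multiple choice."""
    lines = [question] + [f"{chr(65 + i)}. {opt}" for i, opt in enumerate(options)]
    return "\n".join(lines)

def with_chain_of_thought(question: str) -> str:
    """Append a chain-of-thought instruction so the model explains itself."""
    return question + "\nLet's think step by step, then give the final answer."

q = "A train travels 60 miles in 90 minutes. What is its speed in mph?"
print(as_multiple_choice(q, ["30", "40", "60"]))
print()
print(with_chain_of_thought(q))
```

The same question, posed three ways, can produce measurably different results, which is worth remembering when a bot fails a task: before concluding it cannot do something, try restating the prompt.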
The technology has limitless potential, Altman says—“I think AGI will be the most powerful technology humanity has yet invented”—particularly in democratizing access to information globally. “If you think about the cost of intelligence and the equality of intelligence, the cost falling, the quality increasing by a lot, and what people can do with that,” he said, “it's a very different world. It’s the world that sci-fi has promised us for a long time—and for the first time, I think we could start to see what that’s gonna look like.” Still, “like any other previous powerful technology, that will lead to incredible new things,” he says, “but there are going to be real downsides.”
https://time.com/6344160/a-year-in-time-ceo-interview-sam-altman/
The advent of advanced AI will fundamentally transform many aspects of higher education. Once AGI systems are capable of mastering complex subject matter and communicating naturally with humans, they will be able to take on many traditional academic roles currently filled by human professors and instructors. While this may displace some human roles, AGI also presents new opportunities to enhance education. AI tutors with deep expertise could provide personalized instruction to large numbers of students, enabling more affordable and accessible education globally. AI teaching assistants could free up more of professors' time for higher-level research and curriculum development. More broadly, universities will be challenged to redefine their core purpose and value proposition in an age of intelligent machines. Preserving the humanistic dimensions of higher education will be crucial. Universities may emphasize nurturing wisdom, ethical reasoning, leadership, and lifelong learning skills not easily replicated by AI. Researching AI safety and developing wise governance models for advanced AI may also be key priorities.
As artificial general intelligence (AGI) continues to evolve, it will undoubtedly have a profound impact on higher education. While the precise nature of this impact remains uncertain, there are several potential changes that we can foresee.
1. Shifting the focus from rote learning to critical thinking and creativity.
2. Personalizing education and providing lifelong learning opportunities.
3. Redefining the role of faculty.
4. Changing the way we assess student learning.
5. Exploring new forms of collaboration between universities and industry.
In addition to these specific changes, AGI is also likely to have a more general impact on the culture and ethos of universities. Universities will need to become more adaptable, innovative, and interdisciplinary in order to keep pace with the rapid changes that AGI will bring. They will also need to grapple with the ethical and social implications of AGI, and develop policies that ensure that this powerful technology is used for the benefit of society.
The board, mostly independent of OpenAI’s operations, bound to a “for the good of humanity” mission statement, was critical to the company’s uniqueness. As Altman toured the world in 2023, warning the media and governments about the existential dangers of the technology that he himself was building, he portrayed OpenAI’s unusual for-profit-within-a-nonprofit structure as a firebreak against the irresponsible development of powerful AI. Whatever Altman did with Microsoft’s billions, the board could keep him and other company leaders in check. If he started acting dangerously or against the interests of humanity, in the board’s view, the group could eject him. “The board can fire me, I think that’s important,” Altman told Bloomberg in June. “It turns out that they couldn’t fire him, and that was bad,” says Toby Ord, senior research fellow in philosophy at Oxford University, and a prominent voice among people who warn AI could pose an existential risk to humanity.
https://www.wired.com/story/sam-altman-second-coming-sparks-new-fears-ai-apocalypse/
Ray Schroeder, senior fellow at UPCEA, said, "The dozen years ahead will bring the maturing of the relationship between human and artificial intelligence. In many ways, this will foster equity through enhanced education, access to skill development and broader knowledge for all, no matter the gender, race, where people live or their economic status. Education will be delivered through AI-guided online adaptive learning for the most part in the first few years, and more radical 'virtual knowledge' will evolve after 2030. This will allow global reach and dissemination without limits of language or disability. The ubiquity of access will not limit the diversity of topics that are addressed."
Relevant articles published in "Online: Trending Now" at Inside Higher Ed
Ray's Observations in Pew Research Center Report on ChatGPT and other steps in the evolution of humans, digital tools and systems by 2035 6/21/2023
Keynote at "Teaching and Learning with AI" - University of Central Florida 9/24/2023
Keynote at "Oktoberbest Faculty Symposium" - University of South Carolina 10/6/2023
Ray's Daily Curated Reading Lists and Social Media. Blogs with daily updates on the field of online / continuing learning in higher education
Inside Higher Education - "Online: Trending Now" https://www.insidehighered.com/opinion/blogs/online-trending-now
UPCEA Professional, Continuing and Online Education Update http://continuingedupdate.blogspot.com/
Economics and Change in Higher Education by UPCEA http://recessionreality.blogspot.com/
OER Update by UPCEA https://oerupdate.blogspot.com/
Ray's Curation by UPCEA https://rayscuration.blogspot.com/
Alternative Credentials in Higher Ed by UPCEA https://altcred.blogspot.com
Twitter https://twitter.com/rayschroeder
rschr1@uis.edu ~ rayschroeder@gmail.com ~ ray@upcea.edu
Senior Fellow & Strategic Advisor, University Professional and Continuing Education Assn.
Professor Emeritus, University of Illinois Springfield
https://rayschroeder.com