Dr. Andrés Téllez speaks about using Generative AI as an Imagination Companion.
Dr. Andrés Téllez is an Assistant Professor of Design Studies at North Carolina State University with over 15 years of international academic experience in the US and Colombia. His research sits at the nexus of design, education, and technology, with a focus on addressing complex societal challenges. He teaches a range of courses, including human-centered design and design thinking, drawing on his administrative experience as Design Foundations Coordinator at Appalachian State University and as former Head of the Product Design Department at Universidad Jorge Tadeo Lozano in Bogotá, where he led curriculum innovation and accreditation efforts. He holds a Ph.D. in Design from NC State and degrees from Universidad de los Andes, a diverse background that offers a broad perspective and enriches his work in design education and research.
Jason Xiong, Associate Dean for Advanced Studies in Business, is a leading voice in integrating AI into business analytics and graduate education. In this Think Series session, he explored how educators can adapt to the AI era by emphasizing understanding over memorization, encouraging students to use—but also question—AI tools. Jason shared classroom strategies such as requiring students to cite AI use, submit prompts, and compare outputs from multiple tools to detect errors and hallucinations. He highlighted the growing expectation for graduates to be fluent in AI applications across industries, capable of managing complex tasks, verifying data accuracy, and presenting their insights effectively. Ultimately, Jason underscored that AI should enhance—not replace—critical thinking, creativity, and integrity in both learning and professional practice.
AI in Business Education & Analytics: As AI transforms industries, business schools must shift from rote learning to developing students’ ability to understand, verify, and apply AI-driven insights rather than merely retrieving correct answers.
Responsible Use & Academic Integrity: Faculty are encouraged to allow AI use but require citation of AI tools and prompts, reinforcing transparency and ethical application. Incorporating formal AI policy language (e.g., from Academic Affairs or peer institutions) sets clear expectations and protects academic integrity.
Critical Thinking & Multi-Tool Comparison: Students should be trained to cross-check results from multiple AI tools (e.g., ChatGPT, Copilot, Gemini) to identify inconsistencies and hallucinations, cultivating discernment rather than blind trust in automated outputs.
Collaborative & Authentic Learning: Group projects and open-resource assessments mirror real-world problem-solving, encouraging students to integrate AI as a collaborator while remaining accountable for accuracy and originality.
Industry Alignment & Workforce Readiness: Employers now expect graduates to be AI-fluent, capable of using several tools to automate tasks, analyze data, and communicate findings effectively. Mastery lies not in memorizing formulas but in prompt engineering, data validation, and professional presentation.
Human Judgment & Creativity: While AI enhances efficiency, it cannot replace communication, empathy, or ethical reasoning—skills central to leadership and innovation. Educators must preserve these human dimensions within AI-enabled curricula.
Evolving Roles & Future Implications: As AI automates routine work, educators should prepare students for higher-order responsibilities: managing multi-agent systems, verifying outputs, and exercising integrity when no one is watching.
Core Imperative: AI is a tool for thought, not a substitute for it. The next generation of business leaders must learn to think with AI—balancing automation with critical inquiry, creativity, and ethical decision-making.
See video and transcript here.
Password shared in Google Chat.
Michele is an AI policy expert with experience at the Red Cross and Google and as a global technology consultant. In her Think Series talk, Michele explored AI's promise and challenges, from its transformative role in healthcare and humanitarian response to its impact on global workforces. She highlighted pressing issues in AI governance, cybersecurity, and ethics, underscoring the need for harmonized international standards and responsible use. Michele also shared insights on AI's role in policy advocacy, emphasizing how these tools can support, rather than replace, human expertise.
AI in Healthcare & Humanitarian Response: From diagnosis to famine detection, AI offers powerful tools but raises urgent questions about privacy and protections for vulnerable populations.
Global Workforce Divide: AI complements high-skill roles in developed nations but risks displacing lower-skill jobs in developing economies, widening inequality.
Cognitive Offloading & Creativity Loss: Over-reliance on AI may erode critical thinking and originality, creating long-term risks for education and workforce development.
Governance Blind Spots: Humanitarian and crisis settings highlight gaps in AI oversight, where poorly managed deployment can amplify harm.
Cybersecurity & Trust: Expanding AI systems heighten risks of misuse, underscoring the need for robust data security and accountability frameworks.
Fragmented U.S. Regulation: With no national AI law, the U.S. relies on voluntary standards, leaving gaps compared to emerging global governance efforts.
Toward Global Ethical Standards: International bodies like the OECD and UN are exploring harmonized frameworks, signaling the need for AI “ethical treaties.”
Human Agency & Policy Advocacy: AI can accelerate advocacy and productivity, but human judgment, domain expertise, and authentic creativity remain essential.
See video and transcript here.
Password shared in Google Chat.
The conversation with James centered on AI in the workplace, highlighting his experiences with implementation and related challenges. Discussion covered AI’s impact on employment, education, and business—including career progression, data privacy, and the future of traditional institutions. It concluded with insights on opportunities for local news organizations and the digital marketing challenges of adapting to evolving technologies.
AI Reshaping Entry-Level Work: Automation is reducing traditional pathways into careers, raising concerns about talent pipelines and long-term leadership development.
Workplace Efficiency vs. Job Displacement: AI tools boost productivity, especially in document-heavy roles, but often replace junior roles critical for early career growth.
Privacy Tensions in an AI Era: Younger generations show shifting attitudes toward data privacy; AI deployment raises new ethical and compliance challenges.
Rethinking Education & Credentials: The rising cost of higher education and AI's impact on knowledge work may accelerate a shift toward skills-based learning and alternative credentialing.
Adoption Over ROI Metrics: True AI success depends less on financial metrics and more on user adoption, change management, and reclaimed time.
AI as a Strategic Tool in Media: Local news outlets could monetize their data by supplying it to AI models, offering a path to relevance in the digital ecosystem.
Digital Marketing Dilemma: As AI reshapes ad targeting, organizations face urgent decisions about data ownership, monetization, and long-term credibility.