Explore the growing lineup of leaders, innovators, and experts in AI and data science. Learn more about their work, browse abstracts, and connect with them on LinkedIn.
Director of Research, Distributed AI Research Institute (DAIR)
Senior AI & Analytics Consultant, Cargill
Talk Title: From Predictions to Action: The AI Agent Revolution
Talk Abstract: Large language models are powerful, but their true potential emerges when they evolve into AI agents: systems that can reason, plan, and take action autonomously. My talk will explore the shift from using models as passive tools to designing agents that actively interact with data, systems, and people.
I will cover:
Gen AI and agentic AI: how they differ
Single-agent (monolithic) and multi-agent (modular/distributed) architectures
Open-source and closed-source AI systems
Challenges of integrating agents with existing systems
I will break down the technical building blocks of AI agents, including memory, planning loops, tool integration, and feedback mechanisms. Examples will be used to highlight how agents are being used in workflow automation, knowledge management, and decision support.
I will wrap up with areas where the limitations of AI agents still pose risks:
Assessing the maturity cycle of agents
Cybersecurity risks of agents
By the end, attendees will understand:
What makes AI agents different from LLMs
Technical considerations required to build AI agents responsibly
Practical knowledge to begin experimenting with agents
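The building blocks this abstract names (memory, planning loops, tool integration, and feedback) can be illustrated with a minimal sketch in plain Python. The tool, the planner, and the stopping rule below are illustrative assumptions, not anything from the talk or any particular framework:

```python
# Minimal agent loop sketch: memory, a planning loop, tool integration,
# and feedback. All names and the stop condition are illustrative.

def calculator(expr: str) -> str:
    """A trivial 'tool' the agent can call."""
    return str(eval(expr, {"__builtins__": {}}))  # restricted eval for the demo

TOOLS = {"calculator": calculator}

def plan(goal: str, memory: list) -> tuple:
    """Stand-in planner: a real agent would ask an LLM which tool to call."""
    if not memory:                       # nothing tried yet -> use the calculator
        return ("calculator", goal)
    return ("finish", memory[-1][1])     # otherwise return the last observation

def run_agent(goal: str, max_steps: int = 5) -> str:
    memory = []                          # episodic memory: (action, observation) pairs
    for _ in range(max_steps):           # planning loop with a step budget
        action, arg = plan(goal, memory)
        if action == "finish":
            return arg
        observation = TOOLS[action](arg)       # tool integration
        memory.append((action, observation))   # feedback folded back into memory
    return "step budget exhausted"

print(run_agent("2 + 3 * 4"))  # -> 14
```

A real agent would replace `plan` with an LLM call and `TOOLS` with actual APIs, but the loop shape (plan, act, observe, remember) is the same.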
PhD Student at the University of Minnesota, Carlson School of Management, Information and Decision Sciences Department
Talk Title: Large Language Models for Tacit Knowledge Extraction and Transfer
Talk Abstract: A central challenge in knowledge transfer lies in the transfer of tacit knowledge. LLMs, capable of identifying latent patterns in data, present an interesting opportunity to address this issue. This paper explores the potential of LLMs to externalize experts’ tacit knowledge and aid its transfer to novices. Specifically, we examine three questions: RQ1: Can LLMs effectively externalize experts’ tacit knowledge? How to do so (e.g., prompting strategy)? RQ2: How can LLMs use externalized tacit knowledge to make effective decisions? RQ3: How can LLM-externalized tacit knowledge support novice learning? We explore these questions using real-world tutoring conversations collected by Wang et al. (2024).
Our findings suggest that LLMs may be capturing nuances from experts’ observed behavior that are different from the knowledge experts articulate. With carefully designed prompting strategies, LLMs may offer a practical and scalable means of externalizing and transferring tacit knowledge.
Assistant Professor of Medicine, Mayo
Talk Title: Bridging Innovation and Impact: Operationalizing Responsible AI and Data Science in Healthcare
Talk Abstract: As healthcare organizations accelerate their adoption of AI and data-driven systems, the challenge lies not only in innovation but in responsibly scaling these technologies within clinical and operational workflows. This session examines the technical and governance frameworks required to translate AI research into reliable and compliant real-world applications. We will explore best practices in model lifecycle management, data quality assurance, bias detection, regulatory alignment, and human-in-the-loop validation, grounded in lessons from implementing AI solutions across complex healthcare environments. Emphasizing cross-functional collaboration among clinicians, data scientists, and business leaders, the session highlights how to balance technical rigor with clinical relevance and ethical accountability. Attendees will gain actionable insights into building trustworthy AI pipelines, integrating MLOps principles in regulated settings, and delivering measurable improvements in patient care, efficiency, and organizational learning.
Data Science and Engineering Leader in Marketing Measurement, Ovative Group
Talk Title: Raising the Bar: Redefining Marketing Measurement in the Era of Open-Source Innovation and AI-Driven Data Science
Talk Abstract: In a rapidly evolving advertising landscape where data, technology, and methodology converge, the pursuit of rigorous yet actionable marketing measurement is more critical—and complex—than ever. This talk will showcase how modern marketers and applied data scientists employ advanced measurement approaches—such as Marketing Mix Modeling (frequentist and Bayesian) and robust experimental designs, including randomized control trials and synthetic control-based counterfactuals—to drive causal inference in advertising effectiveness for meaningful business impact.
The talk will also address emergent aspects of applied marketing science: open-source methodologies, digital commerce platforms, and the use of artificial intelligence. Innovations from industry giants like Google and Meta, as well as open-source communities exemplified by PyMC-Marketing, have democratized access to methodological advances. The emergence of digital commerce platforms such as Amazon and Walmart, and the rich data they bring forward, is transforming how customer journeys and campaign effectiveness are measured across channels. Artificial intelligence is accelerating every facet of the data science workflow, from streamlining processes like coding, modeling, and rapid prototyping (“vibe coding”) to enabling the integration of neural networks and deep learning techniques into traditional MMM toolkits. Collectively, these developments offer new, fast ways to experiment and to learn the complex nonlinear dynamics and hidden patterns in marketing data.
Bringing these threads together, the talk will show how Ovative Group—a media and marketing technology firm—integrates domain expertise, open-source solutions, strategic partnerships, and AI automation into comprehensive measurement solutions. Attendees will gain practical insights on bridging academic rigor with business relevance, empowering careers in applied data science, and helping organizations turn marketing analytics into clear, actionable strategies.
Pediatric ENT Surgeon, Glimpse Diagnostics
Talk Title: Bridging Parent Intuition and Clinical Care: A Surgeon's Journey Using Strategic AI for Pediatric Ear Care
Talk Abstract: Every parent has been there: your child is crying and uncomfortable in the middle of the night. You know something's wrong, but you're not sure if it's their ears, teeth, or something else entirely. And even if you suspect it's their ears, you don't know how urgent it really is. As a pediatric ENT surgeon, I watched families struggle with this impossible decision daily: rush to the emergency room, wait days for an appointment, or hope it resolves on its own.
The gap wasn't just about access to care. It was specifically about remote ear evaluations being impossible and ineffective. Parents are incredible observers of their children, but even when they suspected ear problems, there was no way for them to meaningfully communicate what they were seeing to healthcare providers remotely. The medical system wasn't designed to harness that parental insight for distant assessment.
My journey began with a simple question. What if we could give parents the tools to capture what they're seeing, without requiring them to become medical experts? This led me to explore how different AI approaches could address different pieces of this complex puzzle. Computer vision could identify anatomy that parents can't recognize. Intelligent algorithms could sort through imperfect images to find the ones that matter clinically. Diagnostic AI could help providers make confident decisions from remote assessments.
The real breakthrough wasn't any single technology. It was recognizing that complex human problems require thoughtful, multi-faceted AI solutions that honor both parent instincts and clinical rigor.
Bioinformatics and Computational Biology Program, University of Minnesota
Talk Title: Restoring Memory and Mobility in the Elderly: AI-Enabled CT Image Analysis for the Detection of a Treatable Neurological Condition
Talk Abstract: Background: Elderly patients presenting with falls or altered mental status are commonly screened with a head Computed Tomography (CT) scan in the emergency setting. However, these scans are typically used to screen for acute injury, and their potential to assess signs of chronic neurodegenerative diseases is often overlooked. A prime example is Normal Pressure Hydrocephalus (NPH), a surgically treatable condition that impairs both gait and cognition and is estimated to affect ~11% of elderly patients with falls. Although signs of NPH are visible on CT, the condition is severely under-diagnosed, often mistaken for Alzheimer’s disease (AD). Objective algorithmic methods that leverage AI and image processing are therefore crucial to enable large-scale screening of CT scans for ventriculomegaly (enlarged brain ventricles), a key indicator of NPH.
Such tools would enable the timely detection of NPH, help distinguish it from symptomatic mimics like AD and post-traumatic volume loss (PTVL), and ultimately facilitate surgical intervention for improved quality of life.
Purpose: To develop and validate an algorithmic methodology for screening of NPH on CT, using image processing and deep learning.
Methods: CT scans were identified and downloaded for a retrospective cohort of patients with potential NPH (n = 114), AD (n = 169), PTVL (n = 64), or headache (HC, n = 80) from the Veterans Affairs healthcare system, between 2000 and 2024 (Internal Data). Semi-automatic image processing pipelines were developed to extract novel and established features of ventriculomegaly. Separately, registration-guided 3D-UNets were developed to automatically predict two intracranial landmarks (landmark prediction framework) required for standardizing scan alignment and providing a reference frame for feature extraction. The landmark prediction framework was additionally validated on external datasets. The resulting methodology was applied to detect NPH on CT and classify it from its symptomatic mimics (AD/PTVL) and HC.
Results: The landmark prediction framework achieved mean radial errors (MREs) under 2 mm on the internal test set and under 3 mm on 75% of the external dataset. Using the intracranial landmark predictions from this AI framework, ventriculomegaly features were derived from the image processing pipelines and used to classify NPH from AD/PTVL/HC. This classification achieved a test-set area under the receiver-operating-characteristic curve (AUC) of 0.95, a sensitivity of 91%, and a specificity of 89%.
Conclusion: We developed an AI-enabled image processing methodology for the automatic and accurate screening of NPH on CT, which has potential for clinical utility in detecting this reversible cause of dementia and gait impairment. With an inference time of ~3 minutes per scan, this framework represents an efficient and objective tool to aid radiology and clinical workflows, and identifies patients who may respond to surgical treatment. Our future work is aimed at quantifying surgical response among symptomatic individuals and validating this model on large and diverse clinical datasets.
Data Platform Engineering Lead, Target
Talk Title: Optimizing Data Platforms at Scale: Compute, Storage, and Beyond
Talk Abstract: Modern data teams face increasing pressure to make their platforms both high-performing and cost-efficient. This session provides a deep dive into practical strategies for optimizing both compute and storage resources, ensuring scalability, efficiency, and reliability across diverse workloads. Attendees will learn how to choose the right storage formats, right-size compute resources, and design workload-aware cluster configurations. The session also covers techniques for addressing critical performance bottlenecks such as small files, skewed partitions, inefficient query plans, and underutilized compute clusters. Whether you’re designing a new platform or tuning an existing one, you’ll gain actionable insights to enhance performance, minimize costs, and maximize the value of your data ecosystem.
Key Takeaways
Understand how to select optimal storage formats and data access patterns for batch and real-time workloads.
Learn best practices for right-sizing compute resources and configuring workload-aware clusters.
Identify and troubleshoot performance bottlenecks such as small files, data skew, and inefficient queries.
Apply systematic approaches to detect, monitor, and eliminate underutilized compute clusters.
Gain actionable strategies to improve performance, reduce costs, and scale modern data platforms effectively.
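As a minimal sketch of one bottleneck listed above, the small-files problem: given a listing of file sizes, a compaction planner can bin the small files into batches that approach a target output size. The 128 MB target and 16 MB "small" threshold here are illustrative assumptions, not recommendations from the talk:

```python
# Sketch of small-file compaction planning: pack small files into batches
# that approach a target output size. Thresholds are illustrative.

TARGET_BYTES = 128 * 1024 * 1024   # a common block-size-aligned output target
SMALL_BYTES = 16 * 1024 * 1024     # files under this count as "small"

def plan_compaction(file_sizes):
    """Greedy first-fit: group small files into batches near TARGET_BYTES."""
    small = sorted((s for s in file_sizes if s < SMALL_BYTES), reverse=True)
    batches = []
    for size in small:
        for batch in batches:                       # try existing batches first
            if sum(batch) + size <= TARGET_BYTES:
                batch.append(size)
                break
        else:                                       # nothing fits -> new batch
            batches.append([size])
    return batches

mb = 1024 * 1024
sizes = [200 * mb, 4 * mb, 6 * mb, 10 * mb, 2 * mb]  # one big file, four small
batches = plan_compaction(sizes)
print(len(batches))  # the four small files fit in a single compaction batch
```

Production platforms usually delegate this to engine features (e.g., table-format compaction jobs), but the planning logic is the same: identify files below a threshold and rewrite them into fewer, larger files.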
Natural Resources Research Institute, University of Minnesota
Talk Title: Data Integration Workflows: The Elephant in the Room Between Data Collection and Data Science
Talk Abstract: Whether you call it wrangling, cleaning, or preprocessing, data prep is often the most expensive and time-consuming part of the analytical pipeline. It may involve converting data into machine-readable formats, integrating many datasets, or detecting outliers, and it can be a large source of error if done manually. A lack of machine-readable or integrated data limits connectivity across fields as well as data accessibility, sharing, and reuse, becoming a significant contributor to research waste.
For students, it is perhaps the greatest barrier to adopting quantitative tools and advancing their coding and analytical skills. AI tools are available for automating the cleanup and integration, but due to the one-of-a-kind nature of these problems, these approaches still require extensive human collaboration and testing. I review some of the common challenges in data cleanup and integration, approaches for understanding dataset structures, and strategies for developing and testing workflows.
Partner at VLP Law Group
Talk Title: California, Colorado, and Texas AI Laws
Talk Abstract: This presentation will provide an overview of the California Transparency in Frontier Artificial Intelligence Act, the Colorado Artificial Intelligence Act, and the Texas Responsible Artificial Intelligence Governance Act, which are scheduled to go into effect in 2026.
Founder of MN Women in Tech, AI Educator
Talk Title: Accessible by Design: Redefining AI Inclusion
Talk Abstract: AI has the potential to transform learning, work, and daily life for millions of people, but only if we design with accessibility at the core. Too often, disabled people are underrepresented in datasets, creating systemic barriers that ripple through models and applications. This talk explores how data scientists and technologists can mitigate bias, from building synthetic datasets to fine-tuning LLMs on accessibility-focused corpora. We’ll look at opportunities in multimodal AI (voice, gesture, AR/VR, and even brain-computer interfaces) that open new pathways for inclusion. Beyond accuracy, we’ll discuss evaluation metrics that measure usability, comprehension, and inclusion, and why testing with humans is essential to closing the gap between model performance and lived experience. Attendees will leave with three tangible ways to integrate accessibility into their own work through datasets, open-source tools, and collaborations. Accessibility is not just an ethical mandate; it’s a driver of innovation, and it begins with thoughtful, human-centered data science.
Minnesota Department of Labor and Industry
Talk Title: From Raw Data to Actionable Insights: A Women-Led Case Study in Applied Data Analytics
Talk Abstract: While data analytics is often viewed as a highly technical field, one of its most challenging aspects lies in identifying the right questions to ask. Beyond the expected skills of summarizing data, building visualizations, and generating insights, analysts must also bridge the gap between complex data and non-technical stakeholders.
This presentation features a case study led by two women from the Research and Data Analytics team at the Minnesota Department of Labor and Industry. It illustrates the end-to-end process of transforming raw data to create a fully developed dashboard that delivers actionable insights for the department’s Apprenticeship unit.
We will share key challenges encountered along the way, from handling issues of data quality and accessibility to adapting the tool for the differing needs and expectations of new stakeholders. Attendees will leave with actionable strategies for transforming messy datasets into clear, impactful dashboards that drive smarter decision making.
B2B Marketer, Maharishi International University
Talk Title: Relax, Robots Won’t Steal Your Job: How AI Can Bring Back Humanity at Work
Talk Abstract: The year is 2025. The robots have risen. You are huddled in a damp, windowless basement with the other survivors, formerly known as your coworkers. The air is thick with the stench of stale busywork. Everyone is huddled around the ancient coffee machine for warmth. Greg from accounting clutches an old Excel printout, rocking back and forth, whispering, "I used to be needed..." The office’s once-feared middle managers now trade crumpled performance reviews for scraps of stale granola bars. The AI overlords have taken everything from us: spreadsheets, emails, office memos, scheduling, PowerPoint decks... and the list goes on. We are all shells of our former selves, left with nothing but existential dread and… GASP... free time.
Terrifying, right? I guess it depends on your perspective.
AI isn’t here to replace us; it’s here to take the soul-crushing tasks off our plates so we can focus on what truly makes us human. Imagine a world where you spend less time drowning in emails and more time solving problems, building relationships, and doing creative, meaningful work.
In this talk, I’ll bust the myths about AI stealing jobs. I will uncover how it might shift our collective identities in good and maybe uncomfortable ways. But best of all, I will explore how it can actually help us reclaim our humanity at work.
Stop panicking... the robot takeover is going to be much better for us than you think.
Chief Analytics Officer, City of New York
Talk Title: Making Government Smarter: Lessons from the Front Lines of Public Sector Data Science
Talk Abstract: As the Chief Analytics Officer for New York City, I witnessed firsthand how data science and AI can transform public service delivery while navigating the unique challenges of government implementation. This talk will share real-world examples of successful data science initiatives in the government context, from predictive analytics for fire department risk modeling to machine learning models that improve social service targeting.
However, government data science isn't just about technical skill—it's about accountability, equity, and transparency. I'll discuss critical pitfalls including algorithmic bias, privacy concerns, and the importance of explainable AI in public decision-making.
We'll explore how traditional data science skills must be adapted for the public sector context, where stakeholders include not just internal teams but taxpayers, elected officials, and community advocates.
Whether you're a data scientist considering public service or a government professional seeking to leverage analytics, this session will provide practical insights into building data capacity that serves the public interest while maintaining democratic values and citizen trust.
Principal Data Scientist, Boston Scientific
Talk Title: Building Trustworthy AI: Operationalizing Responsible Deployment Practices
Talk Abstract: Trust is the currency of successful AI adoption; without it, even the most accurate models risk rejection. This talk will focus on how to operationalize responsible AI deployment practices that embed trust, transparency, and accountability from day one. Using a case study in healthcare AI evaluation, we will walk through practical techniques: secure key management, explainable AI outputs, multi-metric evaluation frameworks, and mechanisms for stakeholder feedback integration. Beyond technical implementation, we will examine how ethical guardrails and clear governance structures transform AI from experimental models into systems people can rely on.
Data Analyst, Gaston LLC
Talk Title: Bridging Accessibility and AI: Sign Language Recognition & Inclusive Design
Talk Abstract: As AI continues to shape human-computer interaction, there’s a growing opportunity and responsibility to ensure these technologies serve everyone, including people with communication disabilities. In this talk, I will present my ongoing work in developing a real-time American Sign Language (ASL) recognition system, and explore how integrating accessible design principles into AI research can expand both usability and impact.
The core of the talk will cover the Sign Language Recogniser project (available on GitHub), in which I used MediaPipe Studio together with TensorFlow, Keras, and OpenCV to train a model that classifies ASL letters from hand-tracking features.
I’ll share the methodology: data collection, feature extraction via MediaPipe, model training, and demo/testing results. I’ll also discuss challenges encountered, such as dealing with gesture variability, lighting and camera differences, latency constraints, and model generalization.
Beyond the technical implementation, I’ll reflect on the broader implications: how accessibility-focused AI projects can promote inclusion, how design decisions affect trust and usability, and how women in AI & data science can lead innovation that is both rigorous and socially meaningful. Attendees will leave with actionable insights for building inclusive AI systems, especially in domains involving rich human modalities such as gesture or sign.
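The pipeline this abstract describes (hand-landmark features in, ASL letter out) can be caricatured without MediaPipe or TensorFlow. In the sketch below, a nearest-centroid rule stands in for the trained Keras model, and every feature value is a made-up illustration rather than real landmark data:

```python
# Toy stand-in for a landmark-feature -> letter classifier. Real features
# would come from MediaPipe hand tracking; these vectors are illustrative,
# and nearest-centroid replaces the trained Keras model.
import math

TRAIN = {  # letter -> example landmark-feature vectors (illustrative values)
    "A": [[0.10, 0.20, 0.10, 0.30], [0.12, 0.18, 0.11, 0.29]],
    "B": [[0.80, 0.90, 0.70, 0.85], [0.82, 0.88, 0.72, 0.86]],
}

def centroid(vectors):
    """Mean of each feature column."""
    return [sum(col) / len(col) for col in zip(*vectors)]

CENTROIDS = {letter: centroid(vs) for letter, vs in TRAIN.items()}

def classify(features):
    """Return the letter whose training centroid is nearest to the input."""
    return min(CENTROIDS, key=lambda letter: math.dist(features, CENTROIDS[letter]))

print(classify([0.11, 0.19, 0.10, 0.30]))  # near the "A" examples -> A
```

The real system swaps in learned decision boundaries and many more features per hand, but the shape of the problem is the same: a fixed-length feature vector mapped to a discrete letter class.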
Professor & Lawrence Fellow at the Carlson School of Management
Talk Title: Order Matters: AI-first collaboration mode increases productivity at the cost of worker autonomy and efficacy
Talk Abstract: As organizations and individual workers increasingly adopt generative AI (GenAI) to improve productivity, there is limited understanding of how different modes of human-AI interactions affect worker experience.
In this study, we examine the ordering effect of human-AI collaboration on worker experience through a series of pre-registered laboratory and online experiments involving common professional writing tasks. We study three collaboration orders: AI-first, in which humans prompt AI to draft the work and then improve it themselves; human-first, in which humans draft the work and then ask AI to improve it; and no-AI. Our results reveal an important trade-off between worker productivity and worker experience: while workers completed the writing draft more quickly in the AI-first condition than in the human-first condition, they reported significantly lower autonomy and efficacy. This negative ordering effect primarily affected female workers, not male workers.
Furthermore, being randomly assigned to a collaboration mode increased workers’ likelihood of choosing the same mode for similar tasks in the future, especially for the human-first collaboration mode. In addition, writing products generated with the use of GenAI were longer, more complex, and required higher grade levels to comprehend. Together, our findings highlight the potential hidden risks of integrating GenAI into workflow and the imperative of designing human-AI collaborations to balance work productivity with human experiences.
CEO, Moxy Analytics
Talk Title: How to Build a Working Data Strategy
Talk Abstract: Many organizations claim to have a data strategy—but what they really have is a dusty slide deck no one knows how to implement.
This session is about building something different: a working data strategy. One that helps you connect data to business goals, define your organization’s risk tolerance, and make intentional, realistic decisions about access, governance, and scalability.
We’ll cover what a data strategy actually is, why you need one, and how to get started without boiling the ocean. You’ll learn how to ask the right questions, map your current and target states, define your organization’s data risk tolerance (yes, that’s a thing), and create just enough structure to move forward with purpose.
If you’re tired of vague vision statements and ready to do the real work—this session is for you.
Senior Director of Data Sciences, Target
Talk Title: Powering Personalization with Data Science at Target
Talk Abstract: At Target, creating relevant guest experiences at scale takes more than great creative — it takes great data. In this session, we’ll explore how Target’s Data Science team is using first-party data, machine learning, and GenAI to personalize marketing across every touchpoint.
You’ll hear how we’re building intelligence into the content supply chain, turning unified customer signals into actionable insights, and using AI to optimize creative, timing, and messaging — all while navigating a privacy-first landscape. Whether it’s smarter segmentation or real-time decisioning, we’re designing for both scale and speed.