Meet the Women Shaping
AI & Data Science
Who They Are, What They Do, and What Their Talks Will Cover
Who They Are, What They Do, and What Their Talks Will Cover
- Keynote Speaker -
CEO, ORCAA
Cathy O’Neil earned a Ph.D. in math from Harvard and worked as a math professor at Barnard College before switching over to the private sector, working as a quant for the hedge fund D.E. Shaw and as a data scientist in the New York start-up scene. In 2016 she wrote the book Weapons of Math Destruction: how big data increases inequality and threatens democracy, and in 2022 she wrote The Shame Machine: who profits in the new age of humiliation. She is the CEO of ORCAA, an algorithmic auditing company.
Talk Title: Auditing Algorithms
- Featured Speakers -
Course Instructor, University of St. Thomas, Graduate Programs in Software and Data Science
Jessi Benzel is a data scientist with over 10 years of experience specializing in data analytics and visualization, application and information systems development, data management and governance, and ethical and applied AI. In addition to her industry work, Jessi is a Course Instructor with the Graduate Programs in Software at the University of St. Thomas, where she is part of a team that developed and currently teaches a Master's-level course on AI Ethics. This course integrates technical expertise with a commitment to responsible AI development. Jessi is passionate about shaping the evolution of data analytics and AI by equipping individuals and organizations to embrace ethical innovation that balances the advancement of technology with the well-being of the humans it will ultimately impact.
Talk Title: AI Ethics in Practice
Abstract: In this engaging and timely discussion, Jessi Benzel will explore how organizations can navigate the ethical complexities of AI implementation, focusing on practical strategies to assess and mitigate bias in AI-driven solutions. Drawing from her own research, Jessi will highlight findings on bias in systems like ChatGPT, which show clear differences in responses to resume writing prompts based on race and ethnicity, revealing the potential for AI to perpetuate harmful stereotypes in algorithmic hiring practices. By demonstrating methods for evaluating AI outputs for fairness used in her study, she will offer actionable insights for data scientists and organizational leaders, making AI ethics both relevant and practical. This talk will empower participants to not only understand ethical concerns but also apply concrete steps to ethically develop and deploy AI technologies that promote fairness and inclusivity across domains.
Research Assistant, University of Minnesota Twin Cities, Department of BICB
Scarlett is a recent Master's graduate in the Bioinformatics and Computational Biology field, where she has acquired a strong foundation in analyzing biological data and applying computational methods to solve complex problems. She is excited to embark on a Ph.D. journey in the same program, focusing on identifying the optimal drill point for twist-drill craniotomy in subdural hematoma patients. By leveraging her expertise in bioinformatics and computational approaches, she aims to contribute to the development of precise and effective surgical techniques that can enhance patient outcomes and revolutionize neurosurgical practices. She is passionate about pushing the boundaries of knowledge and making a meaningful impact in the field of biomedical research.
Talk Title: Automated Chronic Subdural Hematoma Segmentation with 3D U-Net
Abstract: Background and Objective: Traumatic brain injury (TBI) is highly prevalent within the veteran population and can result in chronic subdural hematoma (cSDH). Veterans’ projected incidence rate of cSDH was reported to be 121.4 per 100,000 persons per year, 1,2 significantly higher than that of civilians which range from 1.72 to 20.62 per 100,000 persons per year. The management of cSDHs heavily relies on computed tomography (CT) imaging, with serial scans often acquired to guide treatment decisions.3 In this retrospective study, our objective is to develop a machine learning (ML) model, based on a deep-learning algorithm, to automate cSDH segmentation to improve and assist clinical diagnosis. We train our model based on an available veteran cohort from the Veteran Affairs New York Harbor Healthcare System (NYHHS) to account for population-specific factors when analyzing cSDH incidence trends.1,2
Methods: A total of 65 CT scans were obtained from NYHHS Veteran Affairs Hospital, identified by CPT codes 6110-61108, corresponding to Subdural Evacuating Port System (SEPS) drainage procedures. The hematoma volume ranged from 43.25ml to 484ml across the patients. Among these, 10 patients presented with bilateral cSDH, while the remaining patients had unilateral cSDH. Manual segmentation of the cSDH region was carried out, confirmed by neurosurgeon assessment, to establish the ground truth. We employed a 3D U-Net architecture in our machine learning model development to automate the segmentation of cSDHs utilizing the manually segmented CT scans. This model was trained in the 3D U-Net architecture using TensorFlow that includes contracting and expanding paths with convolutional and up sampling layers, respectively, to capture features at multiple scales.
Result: Our best performing model demonstrated remarkable efficacy, achieving a mean Intersection over Union (IOU) of 0.8743 and a Dice similarity coefficient (DICE) score of 0.9214 on the validation set. Cases where the model performed well typically exhibited clear and distinct segmentation of the cSDH region, closely aligning with the manually segmented ground truth. However, certain cases posed challenges for the model, particularly those with faintly defined hematoma boundaries. These instances often resulted in suboptimal segmentation, highlighting areas for potential model refinement and improvement.
Conclusion: This trained 3D U-Net model’s performance indicates a high level of accuracy in delineating cSDH volumes from surrounding brain tissues, providing clinicians with a valuable tool for diagnostic assistance and treatment planning. The successful implementation of the 3D U-Net model underscores its potential as a reliable and efficient method for automated segmentation of CSDH, offering promise for improved patient care and clinical outcomes in the management of this prevalent neurological condition.
PhD Student at the University of Minnesota, Carlson School of Management, Information and Decision Sciences Department
Mina is a second-year PhD student in the Information and Decision Sciences Department at the University of Minnesota's Carlson School of Management. Her research focuses on Human-AI Collaboration, specifically how AI influences human decision-making. Mina aims to better understand strategies to improve decision-making when AI is integrated into processes. Before starting her PhD, she worked as an Associate at LG Electronics in South Korea and earned her Master's degree in Management Information Systems from Seoul National University.
Talk Title: The Past Tense of “Feel” is “Feeled”: The Effect of Generative AI Decision Aid on Conformity
Abstract: Artificial Intelligence (AI) has often been incorporated into decision-making processes as decision aids, helping humans make choices in complex environments. Classic studies on conformity have shown that in certain situations with group pressures, people knowingly make the wrong choices. In such situations, will support from an AI decision aid change people's tendency of conformity? Our study used an Asch-type peer pressure task to understand how a Generative AI (GenAI) decision aid influences people’s conformity behavior. More specifically, we examine the effects of the correctness of the GenAI decision aid’s recommendations and the presence of explanations. Results showed that a GenAI decision aid’s correct recommendations alone (i.e., without explanations) can offset conformity pressures. Interestingly, the effect of explanations depends on whether the GenAI decision aid provides correct or incorrect recommendations. While having an explanation for correct recommendations does not significantly affect conformity, having an explanation for incorrect recommendations significantly worsens conformity. Our study highlights that GenAI's recommendations can be a double-edged sword for decision-makers under social pressure. Although correct recommendations from GenAI can successfully offset conformity pressure, incorrect recommendations, especially when coupled with hallucinated explanations, can further mislead decision-makers.
Founder, Aria Impressions, LLC
Alex Cunliffe (she/her) is a machine learning engineer with 15 years of experience in the industry. She has worked on a range of machine learning subfields and technologies, from imaging to 3D modeling to natural language processing and LLMs. In 2023, Alex founded Aria Impressions, LLC, where she uses generative AI to create ultra-personalized, illustrated children’s books. Alex holds a PhD in Medical Physics from the University of Chicago. She lives in St. Paul with her husband, young daughters, a dog, and two orange cats.
Talk Title: Creating personalized children's books with generative AI
Abstract: In June 2023, riding the wave of ChatGPT, I started experimenting with generative AI to write and illustrate stories for my daughter. What began as a fun experiment turned into a small business, where I now sell ultra-personalized storybooks to families in the US and Canada. Over the past year, I’ve made over 50 books for friends and strangers, featuring stories like a deep-sea adventure, a magical kitchen, and a fearless "pickle princess" who saves her kingdom from a pickle-loving dragon.
In this talk, I’ll share:
1) My process for writing and illustrating these personalized stories
2) Lessons learned from launching a small business
3) Current challenges and areas for improvement
PhD Candidate, University of Minnesota, Computer Science
My research aims to combine NLP with computational social science. Specifically, I am interested in learning about people—how user preferences, social cues, and contextual factors influence and drive user behavior—in online social media settings. I am also interested in exploring the stylistic analysis of user text content. To pursue my interest in computational social science, I have adopted a variety of deep learning, natural language processing, graphs, and statistical methods.
Talk Title: Under the Surface: Tracking the Artifactuality of LLM-Generated Data
Abstract: LLMs are increasingly employed to create a variety of outputs, including annotations, preferences, instruction prompts, simulated dialogues, and free text. As these forms of LLM-generated data often intersect in their application, they exert mutual influence on each other and raise significant concerns about the quality and diversity of the artificial data incorporated into training cycles, leading to an artificial data ecosystem. We conducted extensive stress tests on the quality and implications of LLM-generated artificial data, comparing it with human data across various existing benchmarks. Despite artificial data's capability to match human performance, this paper reveals significant hidden disparities, especially in complex tasks where LLMs often miss the nuanced understanding of intrinsic human-generated content. This study critically examines diverse LLM-generated data and emphasizes the need for ethical practices in data creation and when using LLMs. It highlights the LLMs' shortcomings in replicating human traits and behaviors, underscoring the importance of addressing biases and artifacts produced in LLM-generated content for future research and development.
Machine Learning Engineer, McKesson
Renee Ernst is a seasoned Data Scientist and Engineer with over a decade of experience transforming Machine Learning and AI concepts into impactful business realities. Skilled in crafting and deploying innovative ML/AI solutions, she excels in leading teams and fostering a culture of innovation. Currently, she's at the forefront of McKesson's MLOps platform development.
Renee's unique blend of experience in the social sciences, data science, and engineering ensures her solutions are both data-driven and human-centered. Her PhD in Social Psychology and Statistics from Iowa State University provides a strong foundation for understanding complex behaviors and designing AI that truly benefits people.
Talk Title: Empowering Data Scientists to Production
Abstract: As companies increasingly rely on machine learning and AI, the challenge of transitioning data science projects from research to production remains a significant hurdle. Ineffective communication and collaboration between data scientists, engineers, and DevOps often hinder this process. However, by empowering data scientists with the right training and tools, we can streamline this transition and accelerate the development of successful data-driven products.
In this talk, I'll share insights from my experience working with highly successful data science teams. I'll discuss how we leveraged a combination of training and tooling to reduce time-to-production and improve model performance.
Key topics include:
Early adoption of MLOps best practices: Lower manual effort, reduced deployment time, improved model reliability and reproducibility.
Effective notebook usage: Strategies for organizing, versioning, and appropriate usage, including best practices for collaboration and automation.
Enforcing code standards: Tools and techniques for ensuring code quality and maintainability, such as linters, code reviews, and automated testing.
Addressing common challenges: Discussing specific pain points faced by data scientists, such as data quality issues, model deployment complexities, and infrastructure limitations, and how MLOps based approaches helped overcome them.
By attending this talk, you'll gain practical insights into how to empower your data science teams and drive successful AI initiatives.
Responsible AI Research Strategist, Invisiblehand.co
Grace Ezzell has 8+ years of experience working in operations, partnerships, and product for early stage startups in fintech, digital currencies, and privacy tech. Grace has international experience working for Gaza’s first startup accelerator (powered by MercyCorps) and digital rights NGOs in the Europe and the Middle East. Most recently, Grace worked on responsible AI for a large social media company. Her interests are in the EU AI Act, AI explainability, and AI fairness. Grace’s approach to technology policy is informed by her experiences supporting products with global user bases and a focus on usability, accessibility, and harm reduction. Grace holds a Bachelors in Economics from the University of Minnesota and a Masters in Digital Currencies from the University of Nicosia.
Talk Title: The EU AI Act, Setting the Standard for Consumer Rights in AI
Abstract: This presentation emphasizes the importance of AI explainability and the user's right to contest AI decisions within high risk AI usage categories in the context of the EU AI Act.
Lead Data Scientist, General Mills
Callie currently serves as a Lead Data Scientist at General Mills and has over 10 years of experience building and deploying machine learning models at scale. Throughout her career, she has worked on a wide variety of use cases, including behavioral profiling, text classification, and digital personalization.
Talk Title: Beyond Tokens and Pixels: Using Embeddings to Unlock Consumer Insights
Abstract: Deep learning applications are growing exponentially, yet most embedding examples stay within the realm of natural language processing or computer vision. This talk will highlight a wide variety of ways General Mills creates embeddings by applying deep learning on tabular data to better understand market trends and consumer behavior. Attendees will leave with a working knowledge of embeddings and inspiration to apply them in new and creative ways.
Lead - AI and Data Science, The Toro Company
Priyanka Ghosh leads an AI and Data Science team at The Toro Company where she champions innovation and drives strategic and transformative solutions that aligns technology with business objectives. Priyanka holds a Master’s degree in Computer Science and a MBA in Marketing. With a relentless passion for emerging technologies and a strong foundation in AI, ML and advanced analytics, Priyanka stays at the forefront of AI advancements, and constantly explore new ways to leverage technology for greater impact. She regularly speaks at industry conferences, and deeply committed to mentorship and leadership development. She actively mentors’ university students and serve on the advisory boards of an AI Startup and the Women in Leadership Program at Minnesota State University, Mankato.
Talk Title: From Data to Decisions and Automation: AI applications in Manufacturing
Abstract: In this presentation I will share how AI and Data Science is transforming the manufacturing landscape, highlighting the shift from a traditional manufacturing to a data centric company.
I will share the use-cases where we are embracing AI (Predictive and Generative) for automation, operational efficiency, and decision-making.
Will deep dive into applications like predictive maintenance, quality control, fraud detection, price optimization, LLM powered AI bots and more.
Sr. Engineering Director, Seagate Technology
Stephanie obtained her Ph. D. in Electrical Engineering from the University of Minnesota in 2010. Her graduate research involved modeling magnetic recording media and spin‐torque based structures. After graduate school, she started working for Recording Head Operations at Seagate Technology in Bloomington, Minnesota as a read transducer designer. In 2015, she joined Seagate Research in Shakopee Minnesota, to model advanced Heat Assisted Magnetic Recording. Now, she is the Sr. Director of the Data Storage and Memory Devices group at Seagate Research. This team is responsible for conducting research on advanced hard drive, and other storage and memory, technologies through experimentation and physics-based simulation."
Talk Title: Opportunities for AI to Accelerate Learning and Discovery Throughout the Technology Funnel
Abstract: Discovering novel technologies can be a slow and resource-intensive process, and artificial intelligence (AI) techniques have the potential to rapidly accelerate the pace of innovation. Seagate, a data storage provider, leverages AI throughout the company from product marketing to wafer manufacturing, but AI’s role in advanced device research is especially promising. Exploring all potential physical phenomena and materials compositions through experiment and simulation is practically infeasible, so automating analysis and prioritization of promising candidates would both expand the scope and increase the rate of innovation. Through case studies of internal projects and university collaborations, the impact of AI from the start to the end of the innovation funnel will be shown. The broadness of topics under the AI for discovery umbrella can be overwhelming, but concrete examples of AI being implemented in industrial research will demonstrate both the benefit and potential for deep learning and data analysis tools.
Head of Technology Strategy & Partnerships, Hewlett Packard Enterprise
Dr. Lindsey Hillesheim is a widely read systems thinker adept at identifying trends and signals to inform short- to long-term technology & innovation strategies. She has 16+ years of experience in technology development & strategy, business development, research, and policy -- spanning biotechnology, AI, and systems engineering. Previous roles include senior technology & innovation leadership roles at HPE, ATP-Bio (NSF Engineering Research Center based at the University of Minnesota), Cray, Adventium Labs, and Strategic Analysis Inc., where she assessed and explored technology for DARPA. Lindsey was AAAS Science and Technology Policy fellow at the U.S. State Department, and served on the Launch Minnesota Advisory Board, a state funded program to support the startup ecosystem in Minnesota. She earned her B.S. in physics and humanities from Valparaiso University and her Ph.D in physics, with a graduate minor in history & philosophy of science from the University of Minnesota. She is an active angel investor and involved in several Al policy and governance forums.
Talk Title: Security & Governance for AI Applications: Introduction & Practical Advice
Abstract: This talk will provide an orientation to security requirements for AI applications and how they differ from traditional applications, and what technologies are required to address these (and how it relates AI safety). The remainder of the session will focus on a practical implementation of a centralized LLM firewall to support compliance & governance for Gen AI applications.
Data Scientist, Element Fleet Management
Wissal Jawad is a Data Scientist at Element Fleet Management, leveraging her Master's degree in Data Analytics to drive business growth through ML and AI solutions. Her area of specialization is natural language processing. She is committed to exploring the latest advancements in LLM and GenAI and is dedicated to staying at the forefront of technology.
Talk Title: Data Science Assistant
Abstract: This project will use the LangChain framework on top of an LLM to create a Data Science Assistant that will automate the following tasks for us by converting the Natural Language requests we provide into the code required to execute the task:
1. Data Analysis
2. Data Science & Machine Learning Modeling
3. Relational Database Querying
4. RAG and Serpapi Tool for Contextual Answers
There are obvious reasons why such a project could be highly significant to business / research. By automating the code tasks common to various operations in Data Analysis, Data Science or Database Querying, such a solution can help with high level people to self serve and get their answers questions rather than send in a request which has the potential to save significant time, and also open up Data Analysis and Data Science to a more general audience that may not be familiar with the code involved.
Partner, VLP Law Group LLP, Minneapolis
Melissa is recognized in Chambers Global Privacy & Data Security and USA - Nationwide Privacy & Data Security: Cybersecurity, USA - Nationwide Privacy & Data Security: Privacy, and USA - Nationwide Technology in 2024 and in Chambers USA - Nationwide Privacy & Data Security and USA - Nationwide Technology in 2022 and 2023 for having “an impressively well-rounded privacy practice,” “handling data breach incidents,” and for being “a highly regarded attorney whose broad practice exhibits strength across a number of technology-related issues” and routinely advising “on M&A, data security issues and related regulatory matters.”
Melissa advises companies on incident response, tabletop exercises, and crisis management, security programs and agreements, insurance policy matters, artificial intelligence, privacy and cyber governance, privacy policies and terms, data security addenda, data processing agreements, business associate agreements, merger and acquisitions, federal and state artificial intelligence, privacy, and data security laws and artificial intelligence and big data.
Talk Title: Colorado's First-of-Its-Kind Artificial Intelligence Law
Abstract: Colorado's new artificial intelligence law is the first-of-its kind and will take effect on February 1, 2026. This presentation will cover when this law applies, what this law requires, and how this law will be enforced.
MN Women in Tech
Talk Title: UpSkill MN: Visual Communications + AI
Abstract: I will discuss the work we (MTN.org) have been doing to introduce the basics of visual communications, creative coding, and generative AI to folks across Minnesota, and how we are creating pathways for upskilling and continued learning for people from underestimated communities through the upcoming North Star Program.
Senior Director Product/ AI COE Ethics Lead, Strategic Education, Inc.
Nell Meshcheryakov is the Senior Director of Strategic Operations and Initiatives at SEI and serves as the Ethical AI lead for the AI Center of Excellence. With an extensive background spanning roles in academics, operations, transformation, innovation, IT, and strategy, Nell brings a holistic approach to the field of Responsible AI. Nell has been passionate about AI since she created her first model in 2017, completed a certificate in Machine Learning and AI from MIT (2020), and focused her dissertation on Algorithmic Bias and Machine Learning (2021). She is currently pursuing a Master’s in Systems Engineering from Harvard. Her interdisciplinary expertise drives her mission to build responsible, impactful AI systems, focusing on fairness, transparency, and accountability.
Talk Title: Balancing Innovation and Ethical Impact: A Guidebook on Responsible Use of Ai
Abstract: In an era where Artificial Intelligence (AI) drives innovation across industries, the need to balance technological advancement with ethical responsibility has never been more critical. This talk, "Balancing Innovation and Ethical Impact: A Guidebook on Responsible Use of AI," delves into the complexities of integrating AI into business and society while safeguarding ethical standards. It explores the challenges of mitigating bias in AI systems, ensuring transparency, and implementing effective regulatory measures. This talk will also reference a qualitative study done by the author for her dissertation in 2020 concerning professional viewpoints of regulation and AI. By providing practical insights and strategies, this talk aims to equip professionals with the knowledge to harness AI's potential responsibly, fostering innovation that aligns with societal values and public trust.
Assistant Professor, University of Wisconsin Stout
Afroza Polin is an Assistant Professor of Statistics at University of Wisconsin-Stout and received her doctorate from Bowling Green State University, Ohio. She received her bachelor’s degree in Statistics from University of Dhaka, Bangladesh and Master’s degree in Biostatistics from University of Hasselt, Belgium. Her research generally focuses on high dimensional data, longitudinal data analysis and survival analysis. Her Ph.D. thesis was on simultaneous inferences for high dimensional longitudinal data. She is teaching Statistics classes including mathematical statistics, regression analysis, probability at undergraduate and graduate level. She loves to involve students into research projects according to their education levels.
Talk Title: Multiple Testing in High Dimensional and Correlated Data
Abstract: In this study we discuss the technique to simultaneous inference for high dimensional longitudinal data. We consider a longitudinal data setting where number of predictors is much higher than the number of sample size. We use Cholesky decomposition to deal with the correlation between repeated measures, and modified lasso regression model to construct test statistic and confidence interval for the regression parameters. Finally, we study the technique of making the adjustments for simultaneous inference. We conduct simulation studies to assess the results of the proposed techniques.
Director of Data Sciences,Target Corporation
Samantha specializes in bridging advanced mathematics to complex systems. She love leading diverse, interdisciplinary teams of scientists and artists. She have 10+ years at Target solving supply chain and product availability with data science.
Talk Title: Product Availability: Using AI to get Milk on the Shelf
Abstract: Have you ever skipped buying milk because you thought you had some at home? But then you didn't have any milk at home? Oops! This is an example of unknown out-of-stocks. Our systematic (or mental) belief of our inventory doesn't match the reality of our empty shelves. My team has spent the last 3 years building an enterprise-wide, data science solution to unknown out-of-stocks, one of the largest inventory problems in retail today. But applications don't stop at retail.
Viewing the problem differently: Do you have faulty, but extremely valuable data? You might benefit through an ensemble modeling approach to your data cleansing. With strong acceptance criteria for your models and clear conflict resolution, your data might benefit from a probabilistic approach to correcting data via data science.
In this talk I'll present some key concepts for fixing data by applying ensemble methodology to probabilistic data science algorithms. In the process of investigating the larger problem, we might even get milk in your refrigerator.
Founder and Educator, Indiana University Bloomington, IEEE, onestopforcloud
Ayisha Tabbassum is a Certified Multi-Cloud Architecture Enthusiast, Senior IEEE Member, Public Speaker, and Educator based in Eden Prairie, Minnesota. With a master's in computer science from Indiana University Bloomington, Ayisha currently serves as a Cloud architect driving Cloud Operations and Multi-Cloud Architecture , where she drives cloud infrastructure solutions on AWS, Azure, and GCP. As the Founder and CEO of One Stop for Cloud, she leads a premier EdTech company focused on simplifying cloud learning.
Ayisha is a seasoned architect with expertise in Cloud, FinOps, SRE, Security, and Observability. Her technical acumen spans automation, CI/CD, and cloud security tools like Wiz and AWS Security Hub. She has presented at numerous conferences, authored several cloud architecture articles, and actively shares insights on Medium. Recognized for her contributions to the tech community, Ayisha is a passionate advocate for AI and cloud adoption in diverse domains.
Talk Title: Revolutionizing Architectural Design with AI and Data Science: Innovations and Ethical Considerations
Abstract: As advancements in artificial intelligence (AI) and data science continue to accelerate, their application in architectural design has become a focal point for innovation. This presentation explores the latest breakthroughs in AI-driven design models that are transforming architectural practices, enhancing creativity, and optimizing building performance. By leveraging vast datasets and sophisticated algorithms, architects can now predict design outcomes, optimize spatial layouts, and personalize structures with unprecedented accuracy.
Key topics include the integration of machine learning techniques in building information modeling (BIM), the development of predictive analytics for sustainable design, and the role of AI in automating complex design processes. Additionally, the session will address the ethical considerations inherent in the deployment of AI in architecture, including issues of data privacy, algorithmic bias, and the importance of maintaining human oversight in design decision-making.
Participants will gain insights into the practical implementation of AI technologies in architectural settings, the challenges faced, and the future potential of predictive analytics to revolutionize architectural design. This presentation aims to provide a comprehensive overview of how AI and data science are reshaping the landscape of architecture, driving improvements in efficiency, creativity, and sustainability.