Kelly Cohen, University of Cincinnati
Abstract: This lecture provides an introductory application-driven exploration of the two most influential fuzzy inference architectures: the Mamdani and Takagi–Sugeno–Kang (TSK) models. We begin with the classical Mamdani framework, illustrated through the well-known restaurant tipping case study, where crisp inputs (service and food quality) are fuzzified, evaluated through a parallel rule base, aggregated, and defuzzified. The example emphasizes membership function design, rule construction, product implication, aggregation, and defuzzification. The second part of the lecture focuses on the TSK parametric fuzzy model, where rule consequents are linear functions of the inputs rather than fuzzy sets. Using a supervised learning–based function approximation example for a nonlinear cubic function, we demonstrate rule extraction via clustering, linear regression for consequent identification, weighted aggregation, and validation against the original function. By comparing these two approaches through concrete numerical examples, participants will gain a deep understanding of the basic workings of fuzzy inference engines, computational efficiency, rule-base design, and when each architecture is most appropriate for modeling and control applications.
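The Mamdani pipeline described above (fuzzify, fire rules, aggregate, defuzzify by centroid) can be sketched in a few dozen lines. The membership functions, rule base, and tip universe below are illustrative assumptions, not the lecture's exact numbers; a tiny TSK counterpart with linear rule consequents is included for contrast.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def tip_mamdani(service, food):
    """service, food in [0, 10]; returns a crisp tip percentage in [0, 25]."""
    # Fuzzify the crisp inputs
    svc_poor = tri(service, -5, 0, 5)
    svc_good = tri(service, 0, 5, 10)
    svc_exc = tri(service, 5, 10, 15)
    food_bad = tri(food, -5, 0, 5)
    food_great = tri(food, 5, 10, 15)

    # Parallel rule base (firing strengths):
    #   R1: poor service OR bad food        -> low tip
    #   R2: good service                    -> medium tip
    #   R3: excellent service OR great food -> high tip
    w_low = max(svc_poor, food_bad)
    w_med = svc_good
    w_high = max(svc_exc, food_great)

    # Product implication scales each consequent set; aggregate with max
    # and defuzzify by centroid over a discretized tip universe (0..25%).
    num = den = 0.0
    for i in range(251):
        t = i * 0.1
        mu = max(w_low * tri(t, 0, 5, 10),
                 w_med * tri(t, 7.5, 12.5, 17.5),
                 w_high * tri(t, 15, 20, 25))
        num += t * mu
        den += mu
    return num / den if den else 0.0

def tsk_output(x, rules):
    """TSK contrast: each rule is (membership_fn, (a, b)) with a *linear*
    consequent y = a*x + b; the output is the firing-strength-weighted average."""
    ws = [mu(x) for mu, _ in rules]
    ys = [a * x + b for _, (a, b) in rules]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)
```

Note the structural difference the lecture highlights: the Mamdani version needs an explicit output universe and a defuzzification pass, while the TSK version collapses to a single weighted average of crisp linear outputs, which is why TSK is typically cheaper to evaluate.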
Bio
Professor Kelly Cohen is an Endowed Chair and Professor of Aerospace Engineering at the University of Cincinnati (UC), where he leads the AI Bio Lab and has established one of the most recognized research programs in Fuzzy Systems and Responsible AI in the College of Engineering and Applied Science. With over 39 years of academic and research experience, he has authored more than 300 interdisciplinary publications spanning fuzzy logic, intelligent control, trustworthy AI, aerospace autonomy, and safety-critical systems. The current President of NAFIPS, Professor Cohen is internationally known for advancing fuzzy logic–based frameworks for explainable and trustworthy AI, particularly in high-consequence applications. In January 2026, he received the prestigious K. S. Fu Award from NAFIPS in recognition of his sustained technical contributions, leadership, and service to the field. His work continues to shape the theoretical foundations and practical deployment of fuzzy systems in engineering, medicine, robotics, and autonomous systems.
Andrew Pownuk, University of Texas at El Paso
Abstract: Engineering structures are designed to operate safely despite uncertainties in loads, material properties, geometry, and environmental conditions.
Traditionally, these uncertainties are modeled using probabilistic methods and reliability theory. However, in many practical situations reliable statistical data are unavailable, and uncertainties arise from incomplete knowledge rather than random variability. In such cases, fuzzy set theory provides a powerful alternative mathematical framework for representing and propagating imprecise information.
This lecture presents an overview of fuzzy-set-based methods for safety analysis of engineering structures. We begin with the basic concepts of fuzzy sets and multivalued logic, followed by the extension principle and α-cut representation, which allow transforming fuzzy problems into interval problems. The lecture then discusses interval arithmetic, fuzzy arithmetic, and computational techniques for solving fuzzy and interval equations, including the concepts of the united solution set, tolerable solution set, and controllable solution set.
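The α-cut reduction mentioned above can be made concrete in a short sketch: each α-cut of a triangular fuzzy number is an interval, so a fuzzy computation becomes a family of interval computations, one per α level. The load and lever-arm numbers below are invented for illustration, not taken from the lecture.

```python
def alpha_cut_tri(a, b, c, alpha):
    """The alpha-cut of a triangular fuzzy number (a, b, c) is the
    interval [a + alpha*(b - a), c - alpha*(c - b)]."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def interval_add(x, y):
    """Interval addition: endpoints add."""
    return (x[0] + y[0], x[1] + y[1])

def interval_mul(x, y):
    """Interval multiplication: min/max over the four endpoint products."""
    p = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(p), max(p))

# Illustrative example: fuzzy load P ~ (9, 10, 11) kN and lever arm
# L ~ (1.9, 2.0, 2.1) m.  The fuzzy moment M = P * L, taken level by
# level, is exactly a family of interval problems.
moment_cuts = {alpha: interval_mul(alpha_cut_tri(9, 10, 11, alpha),
                                   alpha_cut_tri(1.9, 2.0, 2.1, alpha))
               for alpha in (0.0, 0.5, 1.0)}
```

At α = 1 the cut collapses to the crisp value 10 kN · 2.0 m = 20 kN·m, while at α = 0 it widens to the full support [17.1, 23.1] kN·m — the nesting of these intervals is what reconstructs the fuzzy result.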
Further topics include fuzzy differential equations, fuzzy relational equations, and methods for evaluating safety using fuzzy probability and non-probabilistic reliability concepts. The presentation also discusses the limitations of classical probabilistic methods in engineering and illustrates how fuzzy models can complement or replace probabilistic approaches when statistical information is limited.
By reviewing these mathematical tools and computational algorithms, the lecture aims to provide a systematic introduction to the use of fuzzy set theory in the analysis and design of safe engineering systems.
Bio
Dr. Andrew Pownuk began his research on the application of fuzzy set theory in engineering in 1995. He received his Ph.D. in 2001 with a dissertation titled “Applications of Fuzzy Set Theory for the Estimation of Safety of Engineering Structures.” Following his doctoral studies, he collaborated with Chevron Oil Company on the application of fuzzy methods and uncertainty modeling in oil engineering.
Dr. Pownuk has published several research papers on interval equations, fuzzy partial differential equations, and computational methods for uncertainty analysis. A major focus of his research is the integration of different mathematical models of uncertainty—including fuzzy sets, interval analysis, set-valued methods, and probabilistic approaches—to improve the reliability and safety assessment of engineering systems. His academic background spans mathematics, computer science, and engineering, providing a unique interdisciplinary perspective on both the theoretical foundations and practical applications of uncertainty modeling. Dr. Pownuk also serves as Associate Chair of ISO/TC 98, where he contributes to the development of international standards related to structural safety and reliability.
Scott Dick, University of Alberta in Edmonton
Abstract: Modern deep networks are “black boxes”: their decision-making process is opaque, sometimes to the point of being incomprehensible even to human experts in the area. This raises the rational concern that an AI’s next move might be something unexpected and harmful; it also raises the emotional reaction of fearing the unknown. Both are valid, genuine human reactions, and they can critically impact the future of AI. Simply put, if human users do not trust AI algorithms, those algorithms will not be used. When we look at the literature on trustworthy AI, one thing that stands out is how important explanations are. A number of studies, going back decades, show that human users are more willing to trust an AI if its decisions or actions are explained to them. This session will trace how the idea of having an AI "explain itself" developed from the early days of AI into the present-day field of eXplainable Artificial Intelligence (XAI). We will examine two modern approaches (layerwise relevance propagation and LIME) in detail, and then ask how fuzzy logic can play a role in modern XAI.
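The core idea behind LIME — sample perturbations around the input, weight them by proximity, and fit a local linear surrogate whose coefficients serve as the explanation — can be sketched in one dimension. This is our own simplification for intuition, not the actual LIME library; all parameter names and values are illustrative.

```python
import math
import random

def lime_slope(f, x0, n_samples=500, sigma=0.5, kernel_width=1.0, seed=0):
    """Explain black-box f near x0: the slope of a proximity-weighted
    local linear surrogate plays the role of the feature importance."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, sigma) for _ in range(n_samples)]  # perturbations
    ys = [f(x) for x in xs]                                    # black-box queries
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]  # proximity

    # Weighted least squares for y ~ slope*x + intercept, in closed form.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    return cov / var
```

For a sanity check, explaining f(x) = x² near x0 = 3 should recover a slope close to the derivative 2·x0 = 6, even though the model is queried only as a black box.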
Bio
Scott Dick received his B.Sc. degree in 1997, his M.Sc. degree in 1999, and his Ph.D. in 2002, all from the University of South Florida. His Ph.D. dissertation received the USF Outstanding Dissertation Prize in 2003. He was an Assistant Professor from 2002-2008, an Associate Professor from 2008-2018, and has been a Professor since 2018, all with the Department of Electrical and Computer Engineering at the University of Alberta in Edmonton, AB. He was the Director of the Computer Engineering program from 2017-2022.
Dr. Dick’s research interests are in Computational Intelligence, machine learning, explainable artificial intelligence, and the application of these technologies to real-world problems (e.g. Smart Grid, livestock disease management, and intelligent condition monitoring). A major focus is the topic of “complex fuzzy logic,” an extension of type-1 fuzzy logic to complex-valued membership grades. Dr. Dick’s work includes both theoretical analysis of, and constructing machine learning systems based on, this new concept. His work has been funded by NSERC, the Alberta Science and Research Authority, Hewlett-Packard, PRECARN Inc., Enbridge Inc., EPCOR, and Transport Canada. He was the President of the North American Fuzzy Information Processing Society from 2020-2022. He is a member of the IEEE Computational Intelligence Society’s Fuzzy Systems Technical Committee. He is an Area Editor for Evolving Systems and an Associate Editor for Complex and Intelligent Systems. He is a member of the ACM, IEEE, and ASEE.
Vladik Kreinovich, University of Texas at El Paso
Abstract: A lot of poetic nonsense is written about quantum computing: a particle is in two states at the same time, action at a distance, etc. In this talk, using one of the simplest quantum algorithms as an example, we explain, in detail, how all of this actually works.
No serious mathematics is needed, just the basic ideas of probability and of solving linear equations. Along the way, we will demystify some mathematical-sounding terms like "tensor product" -- they will turn out to be simple, but feel free to scare your friends (or impress them, or scare and impress them at the same time) by using these terms with confidence :-)
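As a small taste of the demystification promised above (our own illustration, not the talk's material): the "tensor product" of two single-qubit states is just the Kronecker product of two short vectors, and measurement probabilities are just squared amplitudes.

```python
import math

def tensor(u, v):
    """Kronecker (tensor) product of two state vectors, as plain lists."""
    return [a * b for a in u for b in v]

plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # |+> = (|0> + |1>)/sqrt(2)
zero = [1.0, 0.0]                            # |0>

# Two-qubit state: amplitudes of |00>, |01>, |10>, |11>, in that order.
state = tensor(plus, zero)
# Measurement probabilities are the squared amplitudes -- no mysticism.
probs = [a * a for a in state]
```

Running this gives probabilities [0.5, 0.0, 0.5, 0.0]: the pair is found in |00⟩ or |10⟩ with equal chance, which is the "two states at the same time" poetry reduced to elementary arithmetic.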
Bio
Vladik Kreinovich is a professor of computer science at the University of Texas at El Paso.
He was educated at Leningrad State University and received a doctorate in mathematics from the Sobolev Institute of Mathematics, affiliated with Novosibirsk State University in Novosibirsk.
His research spans several areas of computer science, computational statistics, and computational mathematics, including interval arithmetic, fuzzy mathematics, probability theory, and probability bounds analysis. His work addresses computability issues, algorithm development, verification, and validated numerics for applications in uncertainty processing, data processing, intelligent control, geophysics, and other engineering fields. In 2015, the Society for Design and Process Science gave him its Zadeh Award.