Currently, my research focuses on fostering lifelong learning with artificial intelligence (AI) technologies and on improving executive function skills. The lifelong learning process is best nurtured through both internal and external capacities. Internal capacity can be improved with better focus, cognitive flexibility, strategic thinking, a growth mindset, resilience, and emotional well-being. Externally, one’s capacity for learning can be accelerated with timely, personalized feedback and smart AI scaffolds. My studies on media multitasking, cognitive overload, cognitive offloading, and multimedia design have led me to explore the bridges between human intelligence and artificial intelligence, as well as human-machine interactions and collaborations. Recently, I completed a co-edited book, entitled “Bridging Human Intelligence and Artificial Intelligence,” with my colleagues Dr. Mark Albert and Dr. Mike Spector. I am also working on several projects investigating the use of Minecraft as a productive learning platform for nurturing creativity and developing executive function skills. My goal is to contribute to adaptive AI systems that support life-span and lifelong learning.
I have long been interested in how people learn and in how to help people become lifelong learners with both intrinsic and extrinsic motivations. My personal journey could be characterized as an arc from linguistics, through mass media and educational theory and practice, into instructional technology and media. The core theme of this path is using media and technologies to support and extend human knowledge, skills, communication, and expression. Each period of history presents its own challenges for learning. One of the current challenges, and opportunities, is the rapid development of artificial intelligence (AI). The potential of AI inspires excitement, fear, and uncertainty. Education and lifelong learning play a key role in shaping this process. As an educator, I aim to actively participate in the evolution and development of newer AI technologies so that we can promote ethical and equitable uses of AI for the betterment of all people and societies.
Several theories have guided my scholarship, and multiple scholars have influenced my thinking. They include multimedia design (e.g., Richard Mayer), cognitive load (e.g., John Sweller), media theories (e.g., Marshall McLuhan), constructivism (e.g., John Dewey; Jean Piaget; Lev Vygotsky), and transformative learning (e.g., Patricia Cranton; Jack Mezirow). As I explore the relationship between human capacity and the tools and technologies that support human learning, I am reminded of McLuhan’s discussions of media and technologies creating both “amputations and extensions” of our senses and bodies, shaping them into a new technical form. McLuhan stated that “by continuously embracing technologies, we relate ourselves to them as servomechanisms” (McLuhan, 2002, p. 235). It is this dependency on and linkage to technology that makes it an integral part of our lives. As such, whether intentionally or not, we become one large “bio-mechanical” system of sorts (McLuhan, 2002). This idea seems even more relevant to human-computer interactions and collaborations. With the further emergence of artificial intelligence, humankind will be positioned to receive feedback from non-human systems rooted in big data. It is up to us to ensure that this feedback, which will shape our behaviors and learning, is both humane and moving in positive and unbiased directions.
I think it is important for RTD members to stay open to new methodologies while remaining critical in examining the potentials and drawbacks of those methodologies. In the past several years, I have incorporated eye tracking and electroencephalography (EEG) into mixed-methods studies, with the goal of examining creativity and learning through different kinds of data. Recently, I have ventured into learning analytics. I am grateful to be a participant in a year-long professional development program, the Learning Analytics in STEM Education Research (LASER) Institute, funded by NSF. I feel rejuvenated to be a student again, learning to leverage new data sources and apply computational methods (e.g., network analysis, text mining, and machine learning) to support my research and to develop newer lines of inquiry.
I am currently wrapping up a book on Ethics and Educational Technology, co-authored with Heather Tillberg-Webb, that is the culmination of nearly 20 years of work at the intersection of ethics and design. It is, in fact, the first book written on this topic in our field.
Originally, I studied English with a focus on rhetorical analysis and critical theory. As I studied both classical Greek rhetoric, such as Phaedrus, in which the ethics of a new techne in the academy, writing, is hotly debated, and new (at the time) work on hypertext and its influence on meaning making, I became more interested in ethics and technology and in how we might engage more critically with technology. So I switched fields of study. Initially, I was most interested in ethics from a systems and change perspective, examining the systemic effects of introducing new technologies into learning and workplace contexts. That grounding in systems theory and the change literature persists in my work today, but over time I grew particularly interested not only in how we critically analyze issues but also in how we translate that analysis into design decisions. One of the central themes of our book is how design functions as a theory of action in our field, where ethics becomes something we do, not just a set of standards or a code that some committee manages.
Over the years, as I studied ethics more deeply, I was struck by how many of the early authors in our field raised ethical concerns about the “worthwhileness” of educational technology, such as Davies, Finn, and Kaufman (all reprinted in Ely & Plomp, 1996). Davies (1996) argued that educational technology “forces us to reflect on the morality of what we are about, by its very insistence on defensible choices” (p. 15). Technology philosopher Feenberg (2001) put the question to our field pointedly: “whither educational technology,” or to what end? These early authors all called for contemplation and interrogation of technology in our practices alongside the acts of design, development, and implementation. Yet none of our instructional design models or processes meaningfully integrates ethical considerations or reflection into the design process. Chasing the ideas of these early authors, I began to explore Schön’s The Reflective Practitioner, and his concept of reflection-in-action became yet another theme in my work.
My work on ethics thus draws on design research, philosophy of technology, and reflective practice as the three legs of the stool upon which ethics in practice sits. (Systems theory and change remain deeply embedded in my treatment of ethics as well.) Inspired by other fields where ethics is seen as part of practice and designers aim to “do good” (not just “do no harm”), our book aims to resituate ethics in our field as a form of reflection-in-action embedded in our professional design practices. Doing so requires examining our views of technology and how they inform different ethical perspectives, to uncover where and how our philosophies and values influence our designs, which in turn affect learners and systems. Our book will be off to press soon with Routledge and should be available sometime in 2022. In it, we cover the topics mentioned above, along with ideas for how and where ethics can be integrated into planning, evaluation, design, development, and technology selection. We also developed “Ethics in Practice” sections that explore ethical problems arising in practice, such as accessibility and data rights and privacy, with resources and activities to help designers address ethical issues that are indeed very practical in nature.