The hard sciences are the easy ones, and the soft sciences are the hard ones!
Dr. Jeff Shrager is Chief Scientist of Blue Dot Change and an Adjunct Professor in the Symbolic Systems Program at Stanford. He holds BSE, MSE, and PhD degrees in Computer Science (University of Pennsylvania) and in Cognitive and Developmental Neuropsychology (Carnegie Mellon University). He has conducted postdoctoral work in Cognitive Neuroscience and Functional MRI at the University of Pittsburgh's Learning Research and Development Center, and in Marine Environmental Microbial Molecular Biology at the Carnegie Institution of Washington, Department of Plant Biology, at Stanford. His work spans Artificial Intelligence and the Cognitive Sciences, including cognitive and developmental neuroscience; formal and informal science and math education; scientific computing; human learning and brain development; machine learning; molecular, microbial, and marine biology and genomics; bioinformatics; environmental biochemistry; discrete mathematics; and computational simulation of a wide range of complex systems. Dr. Shrager's current applied work focuses on how we can oxidize, and thus remove, methane from the atmosphere. In his academic work he studies how science works and how scientists think, and builds intelligent tools, agents, models, and platforms to support and improve scientific reasoning and other aspects of the scientific process. He has authored or co-authored over one hundred peer-reviewed papers and three books, and has co-founded three successful AI-based biomedical companies: one in drug discovery robotics and two in cancer informatics. Before joining Blue Dot Change, Dr. Shrager was co-founder, CTO, and Director of Engineering and Research of xCures, his third AI-based biomedical startup.
Extended research summary:
My work focuses primarily on how science works and how scientists think, and on building intelligent tools, agents, models, and infrastructure to support and improve scientific reasoning and other aspects of the scientific process. Symbolic and sub-symbolic computation must cooperate to support flexible, robust learning and cognition. Symbolic-level "complex" learning and reasoning is often exemplified by "book learning," the pinnacle of which is taken to be philosophical or scientific reasoning and discovery; sub-symbolic "sensory-motor" or "perceptual" learning, by contrast, is exemplified by learning to walk, cook, or drive. Through computational modeling, as well as laboratory and field research, I study how symbolic and sub-symbolic (sensory-motor/perceptual) computation cooperate to enable flexible and robust learning and cognition in many areas. Especially interesting is early child development, during which the brain is becoming organized and the child is embedded in a rich cognitive and sensory-motor scaffold. My computational work in this area has produced several influential models of brain self-organization, and of how high-level reasoning interacts with, and indeed relies upon, the sub-symbolic cognitive infrastructure, while, at the same time, the organization of the sub-symbolic sensory-motor systems is guided by higher-level activity. Three projects stand out in triangulating my contributions in these areas: 1. My model (with David Klahr) of "instructionless learning," based on a process called "commonsense perception" that combines symbolic and "perceptual" reasoning; 2. My model (with Mark Johnson) of cortical parcellation, which explains how the brain obtains its functional architecture and which was a precursor to later "deep learning" architectures; and 3.
My model (with Bob Siegler) of the development of arithmetic knowledge and strategic skill, which has been widely influential on a generation of developmental modelers, as well as in educational science, and which my colleagues and I continue to evolve to encompass recent findings in systems neuroscience. I also apply my work to real-world science, especially in biocomputing: I co-founded, and served as CTO and engineering lead for, two (slightly) successful scientific biocomputing companies, and I envisioned, created, and led the team that developed BioBike and BioDeducta, decade-long NASA- and NSF-funded projects that built the world's first cloud-based "intelligent" scientific computing engine (a precursor to Wolfram Alpha). I have co-authored nearly a hundred peer-reviewed papers in areas such as machine learning, graph theory, developmental psychology, computational psychology, drug discovery, molecular biology, computational biology, privacy and computer security, and even the philosophy of science.