Panel on Representational Paradigms for Cognitive AI

Oct 21, 2021, 9-11 am PDT

YouTube link to panel recording

There is a wide gap between current machine learning representations and the way our minds represent reality. Our mental representations are dynamic, coherent, unified (in the sense that we establish relationships among all our domains of knowledge, in the context of a global universe), and updated on the fly. In this panel, we bring together leading thinkers and practitioners from cognitive science, robotics, AI, and philosophy to discuss representations for future generations of AI systems.

This is the first in a series of events on Cognitive Artificial Intelligence. The goal of Cognitive AI is to build and understand systems that can make sense of their environment, combine knowledge and perception, learn to act in domains they have not encountered before, make autonomous decisions and explain them, and interact deeply with people and human society.

Program

Mark Bickhard

Mark Bickhard is the Henry R. Luce Professor in Cognitive Robotics and the Philosophy of Knowledge at Lehigh University, and is affiliated with the Departments of Philosophy and Psychology. His work ranges across process metaphysics and emergence; consciousness, cognition, language, and functional models of brain processes; and persons and social ontologies. Bickhard models cognition as emergent in agent processes for interacting with the world.

Cognition and Truth Value

I’m interested in a metaphysical problem: what is representing? Is it correspondence of some sort (e.g., informational, causal, lawful, …)? I propose a model of emergent truth value, rather than correspondence. Truth value has always been the fundamental problem for correspondence: the correspondence has to be correct, and how can the organism determine whether it is correct or not? So I propose to take truth value per se as the fundamental criterion.

The model and the issue have both theoretical and practical consequences: for example, without organism-detectable error (detection of a truth value of ‘false’), there can be no error-guided behavior or learning. So organism-detectable error has to have emerged, at least in principle. The core proposal is that normative anticipation of internal functional processes can be true or false, and can be (in principle) functionally detectable.

Stephen Grossberg

Stephen Grossberg is Wang Professor of Cognitive and Neural Systems; Director of the Center for Adaptive Systems; and Emeritus Professor of Mathematics and Statistics, Psychological and Brain Sciences, and Biomedical Engineering at Boston University. He is a principal founder and current research leader of the fields of computational neuroscience, theoretical psychology and cognitive science, and biologically inspired engineering, technology, and AI. In 1957-1958, he introduced the paradigm of using systems of nonlinear differential equations to develop neural network models that link brain mechanisms to mental functions, including widely used equations for short-term memory (STM), or neuronal activation; medium-term memory (MTM), or activity-dependent habituation; and long-term memory (LTM), or neuronal learning. His work focuses on how individuals, algorithms, or machines adapt autonomously in real time to unexpected environmental challenges. Together, these discoveries provide a blueprint for developing autonomous adaptive intelligence.

How Each Brain Makes a Mind: From Brain Resonances to Conscious Experiences

This talk will describe some of the themes summarized in my new book, Conscious MIND, Resonant BRAIN: How Each Brain Makes a Mind. The book was written to be self-contained and non-technical, in a conversational style, as a series of stories. It explains how brains make minds, starting with perception and then moving on to cognition, emotion, and action, in both healthy individuals and clinical patients. In particular, the book describes the most advanced cognitive and neural theory of how our brains learn to attend to, recognize, and predict objects and events in a changing world. All the foundational hypotheses of this Adaptive Resonance Theory, or ART, have been supported by subsequent psychological and neurobiological experiments, and ART has provided principled and unifying explanations of hundreds of additional experiments. ART shows how humans can learn quickly without suffering catastrophic forgetting, and thereby provides a solution to the stability-plasticity dilemma. ART dynamics also clarify how, where in our brains, and why evolution created conscious states of seeing, hearing, feeling, and knowing, and how these conscious states enable planning and action to realize valued goals. ART can be derived from a thought experiment about how any system can autonomously correct predictive errors in a changing world; during this derivation, the words mind and brain are never mentioned. ART is thus a universal solution to a general problem: how autonomous adaptive intelligence is achieved. Because of this universality, the book can explain how biological neural network models such as ART provide a blueprint for autonomous adaptive intelligence in applications to engineering, technology, and AI.
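To make the stability-plasticity point concrete, here is a minimal ART-1-style clustering sketch in Python. It is an illustrative simplification of ART’s search-and-resonance cycle, not code from the book; the function name, parameter values, and toy data are assumptions.

```python
import numpy as np

def art1_train(patterns, vigilance=0.7, beta=1.0):
    """Cluster binary patterns with a minimal ART-1-style loop.

    Committed categories only become more specific (weights are ANDed
    with matching inputs), so earlier memories are never overwritten by
    later ones: fast learning without catastrophic forgetting.
    """
    categories = []   # one binary weight vector per learned category
    labels = []
    for I in patterns:
        I = np.asarray(I, dtype=bool)
        # Bottom-up choice: rank categories by activation |I AND w| / (beta + |w|).
        scores = [np.sum(I & w) / (beta + np.sum(w)) for w in categories]
        winner = None
        for j in np.argsort(scores)[::-1]:
            # Top-down vigilance test: does the category match the input well enough?
            if np.sum(I & categories[j]) / np.sum(I) >= vigilance:
                winner = j          # resonance
                break               # otherwise: reset and try the next category
        if winner is None:          # nothing resonates -> recruit a new category
            categories.append(I.copy())
            winner = len(categories) - 1
        else:                       # fast, stable learning: intersect weights with input
            categories[winner] = categories[winner] & I
        labels.append(winner)
    return categories, labels

# Hypothetical toy data: noisy variants of two binary prototypes.
data = [[1,1,1,0,0,0], [1,1,0,0,0,0], [0,0,0,1,1,1], [0,0,0,1,1,0]]
cats, labels = art1_train(data, vigilance=0.6)
print(labels)  # [0, 0, 1, 1]: each prototype claims its own stable category
```

Raising the vigilance parameter forces finer categories; lowering it yields coarser ones, which is how such a network trades generality against specificity without forgetting.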


Yulia Sandamirskaya

Dr. Yulia Sandamirskaya leads the Applications Research team of the Neuromorphic Computing Lab at Intel. Her team in Munich develops algorithms based on spiking neural networks for neuromorphic hardware, to demonstrate the potential of neuromorphic computing in real-world applications. She has 15 years of research experience in the fields of neural dynamics, embodied cognition, and autonomous robotics. She led the research group “Neuromorphic Cognitive Robots” at the Institute of Neuroinformatics of the University of Zurich and ETH Zurich, Switzerland, and the “Autonomous learning” group at the Institute for Neural Computation at Ruhr-University Bochum (RUB), Germany. She has chaired the European Society for Cognitive Systems and coordinated the network action NEUROTECH, which supports the neuromorphic research community in Europe.

Memory, intentionality, and autonomy enabled by neuronal attractor dynamics

Neuronal attractor dynamics allow us to build neural networks that form stabilized activity patterns. These patterns help us integrate sensory inputs that arrive from different sensors with varying temporal sampling, and stabilize control signals to bridge different behavioral time scales. Attractor networks can be used to build intentional neuronal units, which create representations of entities external to the neural system: perceived objects and movement goals. We can learn relations between such intentional units in interaction with the environment, and thus demonstrate a cognitive architecture capable of behaving and learning autonomously. I will argue that such architectures will enable AI for smart and useful robots.
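To make this concrete, the sketch below simulates a one-dimensional dynamic neural field, a standard attractor model in this line of work: a brief localized input creates a bump of activity that remains stable after the input is removed, a simple working-memory attractor. All parameter values here are illustrative assumptions.

```python
import numpy as np

N, dt, tau, h = 100, 1.0, 10.0, -2.0   # field sites, time step, time constant, resting level
x = np.arange(N)

# Lateral interaction kernel: local excitation minus global inhibition.
dist = np.abs(x[:, None] - x[None, :])
w = 2.0 * np.exp(-dist**2 / (2 * 4.0**2)) - 0.5
f = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))   # sigmoidal firing rate

u = np.full(N, h)                      # field activation starts at the resting level
for t in range(300):
    # Transient localized input around site 50, present for the first 100 steps only.
    stim = 6.0 * np.exp(-(x - 50)**2 / (2 * 3.0**2)) if t < 100 else 0.0
    u += (dt / tau) * (-u + h + stim + w @ f(u))   # Amari-style field dynamics

# A self-sustained bump persists after the input is gone: a memory of "location 50".
print(u.max() > 0, int(u.argmax()))    # True 50 (approximately)
```

The same stabilized bump can encode a movement goal or an object hypothesis, which is what makes such units usable as building blocks of intentional representations.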


Jerome Busemeyer

Jerome Busemeyer was a Full Professor at Purdue University until 1997 and is now Distinguished Professor in Psychological and Brain Sciences, Cognitive Science, and Statistics at Indiana University-Bloomington. His research has been funded by the National Science Foundation and the National Institute of Mental Health, and he has served on grant review panels for both agencies. He was the Manager of the Cognition and Decision Program at the Air Force Office of Scientific Research from 2005 to 2007. He has published five books on decision and cognition, and over 100 journal articles across disciplines. He served as Chief Editor of the Journal of Mathematical Psychology and Associate Editor of Psychological Review, and was the founding Chief Editor of Decision. He is a fellow of the Society of Experimental Psychologists and won that society’s prestigious Warren Medal in 2015. He became a fellow of the Cognitive Science Society and a fellow of the American Academy of Arts and Sciences in 2017. During his early career, he became well known for developing a dynamic and stochastic model of human decision making called decision field theory. Later, he was one of the pioneers of a new approach to cognition based on principles from quantum theory. In 2012, Cambridge University Press published his book with Peter Bruza introducing this new theory, which applies quantum probability to model human judgment and decision-making.

Modeling cognition and decision using quantum probability theory

What type of probability theory best describes the way humans make judgments under uncertainty and decisions under conflict? Although rational models of cognition have become prominent and have achieved much success, they adhere to the laws of classical probability theory, despite the fact that human reasoning does not always conform to these laws. For this reason we have seen the recent emergence of models based on an alternative probabilistic framework drawn from quantum theory. These quantum models show promise in addressing cognitive phenomena that have proven recalcitrant to modeling by means of classical probability theory. This talk compares and contrasts probabilistic models based on Bayesian or classical versus quantum principles, and highlights the advantages and disadvantages of each approach.
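One signature phenomenon that such models capture is the question-order effect. In the toy sketch below (the angles and variable names are illustrative assumptions, not taken from the talk), two questions are represented as non-commuting projectors in a two-dimensional Hilbert space, so the probability of answering “yes” to both depends on the order in which they are asked, which no single classical joint distribution can reproduce.

```python
import numpy as np

psi = np.array([1.0, 0.0])          # initial belief state, unit length

def yes_projector(theta):
    """Projector onto the 'yes' axis of a question measured at angle theta."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

A = yes_projector(np.pi / 8)        # question A's 'yes' subspace
B = yes_projector(3 * np.pi / 8)    # question B's 'yes' subspace

def p_sequence(P1, P2, psi):
    """P(yes to the first question, then yes to the second): ||P2 P1 psi||^2."""
    return np.linalg.norm(P2 @ (P1 @ psi)) ** 2

print(p_sequence(A, B, psi))        # P(A yes, then B yes) ~ 0.427
print(p_sequence(B, A, psi))        # P(B yes, then A yes) ~ 0.073
# The two orders disagree because A and B do not commute; in classical
# probability, where events are commuting sets, order cannot matter.
```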



Steven Rogers

Dr. Steven K. Rogers is the Senior Scientist for Autonomy at the Air Force Research Laboratory, Wright-Patterson AFB, Ohio. He leads the AFRL Autonomy Capability Team in the rapid advancement of autonomy R&D. His personal research has focused on QUalia Exploitation of Sensing Technology (QUEST): how to build autonomous systems by replicating the engineering characteristics of consciousness. After retiring from active duty in the Air Force, Dr. Rogers founded a company to develop practical applications of advanced information-processing techniques for medical products. The company invented the world’s most accurate computer-aided detection system for breast cancer. He has over 150 technical publications and more than 20 patents.

What are the tenets for machine representations (artificial qualia?) that enable flexible behaviors?

Representations are how agents structure their knowledge. Knowledge is what an agent uses to create meaning. Understanding is associated with the usefulness of the created meaning for accomplishing a task. Deeper meaning is required for an agent to respond acceptably to a range of tasks and to behave flexibly. The big challenge for artificial intelligence / machine learning (AI/ML) is creating machine representations that enable peer, task, and cognitive flexibility. Nature’s solution to flexible behavior is consciousness. The future of AI/ML will answer the key questions: What is consciousness? Where does consciousness come from? What sorts of agents have consciousness? What is intelligence, and how does it relate to consciousness? AI/ML will lead to new insights into conscious representations in nature, a Theory of Consciousness, and that theory will in turn lead to new machine representations, enabling human-machine teams of agents that can solve the really important problems we face.

Joscha Bach

Joscha Bach, PhD, is a cognitive scientist and AI researcher focusing on computational models of cognition and neuro-symbolic AI. He has taught and worked in AI research at Humboldt University of Berlin, the Institute for Cognitive Science in Osnabrück, the MIT Media Lab, and the Harvard Program for Evolutionary Dynamics, and he is currently a principal AI researcher at Intel Labs, California.

Perception, Reflection and Coherence

Mental representations must fulfill a number of conditions that are not met by all common paradigms in AI. Representations for cognitive systems have to be universal (they deal with all kinds of perceptual and abstract structure), unified (all representational domains are related within a common global context: the universe of meaning), and executable (they include active operators that can change other representations), and they must minimize constraint violations. Perceptual representations are usually dynamic, real-time, geometric or scalar, and embedded in a fixed architectural hierarchy. Conceptual/analytic representations are usually discrete, compositional, low-dimensional, and organized in a flexible hierarchy. Understanding the interaction between perceptual and analytic representations is crucial for Cognitive AI.
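As a loose illustration of these conditions (entirely an assumed sketch, not code from the talk), the toy Python below puts facts and operators in one unified store, makes operators themselves representations that rewrite other representations (executability), and settles by reducing a constraint-violation score.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Store:
    """A unified store: perceptual and conceptual facts share one context."""
    facts: Dict[str, float] = field(default_factory=dict)
    operators: Dict[str, Callable[[Store], None]] = field(default_factory=dict)

    def violation(self) -> float:
        # Toy coherence constraint: percept and concept should agree.
        return (self.facts["percept:brightness"] - self.facts["concept:brightness"]) ** 2

    def settle(self, steps: int = 50) -> None:
        # Run operators while they keep reducing constraint violations.
        for _ in range(steps):
            before = self.violation()
            for op in self.operators.values():
                op(self)
            if self.violation() >= before:
                break

def align(st: Store) -> None:
    # An executable representation: an operator that rewrites another fact,
    # nudging the abstract concept toward the current percept.
    p, c = st.facts["percept:brightness"], st.facts["concept:brightness"]
    st.facts["concept:brightness"] = c + 0.5 * (p - c)

s = Store(facts={"percept:brightness": 0.9, "concept:brightness": 0.1})
s.operators["align"] = align
s.settle()
print(round(s.facts["concept:brightness"], 3))  # 0.9: the store has become coherent
```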