Vectors of Cognitive AI: Self-Organization

Tue, Jan 22, 2022, 9-11 am PST

Chat log.

YouTube link to panel recording

Panelists: Prof. Christoph von der Malsburg, Prof. György Buzsáki, Prof. Dave Ackley, Dr. Joscha Bach.

Biological and social agents are very different from our present approaches to technologically designed artificial agents. Technological systems are constructed “from outside in”: they extend a world with known, reliable functionality by forging a deterministic substrate into additional, required functions. This is true whether we are building a bicycle in a workshop or a learning algorithm in a software development environment. In contrast, biological systems (such as plants, or the mind of a human being) grow “from inside out”: they organize an indeterministic substrate with unreliable properties into a structure that converges to serving the required function, and that will even self-heal and regrow when it is damaged or disturbed. What can technological systems (and especially AI) learn from the self-organization of biology? What basic principles drive self-organization, and how do they lead to efficient, robust and adaptive implementations of intelligent information processing? How can we formally describe self-organizing systems in a computational context?
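
As a toy illustration of growing “from inside out” (a sketch of the general idea, not any panelist's model), consider a row of cells that follow only a local rule — move toward the average of your neighbors — with fixed values at the two boundaries. The global gradient is never specified anywhere; it emerges from the local rule, and the same rule regrows it after damage:

```python
# Toy illustration of self-organization (not any panelist's model):
# boundary cells hold fixed values, interior cells repeatedly average
# their neighbors. The global structure (a linear gradient) emerges
# from the local rule and self-heals after damage.

def relax(cells, steps=2000):
    """Repeatedly replace each interior cell with its neighbors' mean."""
    for _ in range(steps):
        for i in range(1, len(cells) - 1):
            cells[i] = 0.5 * (cells[i - 1] + cells[i + 1])
    return cells

N = 11
cells = [0.0] * N
cells[-1] = 1.0            # fixed "source" at one boundary
grown = relax(cells)       # converges toward a linear 0..1 gradient

damaged = grown[:]
for i in range(3, 8):      # destroy the middle of the "tissue"
    damaged[i] = 0.0
healed = relax(damaged)    # the same local rule regrows the gradient
```

No cell knows the target pattern; the structure is implicit in the rule plus the boundary conditions, which is why regrowth after damage comes for free.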

In this panel, we discuss perspectives on self-organization in the context of AI, neuroscience and general computation.

Program

Christoph von der Malsburg

Abstract:

Although the connectivity of the brain needs a petabyte to be described, it is generated on the basis of mere gigabytes of genetic and training data. The process responsible for its generation, network self-organization (NWSO), has been studied intensively in neuroscience. The connectivity of the brain can be seen as an overlay of net fragments, each of which is generated and dynamically stabilized by NWSO. These net fragments form the building elements of the brain's cognitive architecture. Important specific net structures are topological textures and homeomorphic mappings between them. These serve, for instance, as the basis for invariant object representation and recognition or, more generally, abstract schema recognition.
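The compression argument above can be made concrete with a hypothetical toy (a Gaussian neighborhood rule standing in for NWSO, not a model of it): a quadratically large connectivity matrix is generated from a rule with a single parameter, so the description length is the rule, not the matrix.

```python
import math

# Hypothetical sketch of the compression argument: n*n connection
# weights are generated from a short rule plus one parameter, rather
# than stored explicitly. The rule here is a Gaussian neighborhood on
# a 1-D topological texture (a stand-in for NWSO, not a model of it).

def weight(i, j, sigma=2.0):
    """Connection strength decays with topological distance |i - j|."""
    return math.exp(-((i - j) ** 2) / (2 * sigma ** 2))

n = 300
w = [[weight(i, j) for j in range(n)] for i in range(n)]  # 90,000 weights

# The description is the rule plus sigma, not the 90,000 numbers.
```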

Speaker Bio:

Christoph von der Malsburg, PhD in physics from the University of Heidelberg, served as a research scientist at a Max Planck Institute in Göttingen, as Professor of computer science, neuroscience, physics and psychology at the University of Southern California, and as Director of the Institute of Neuroinformatics at Bochum University; he is now Senior Fellow at the Frankfurt Institute for Advanced Studies and Visiting Professor at the Institute for Neuroinformatics ETH/UZH Zurich. He has received many awards, among them the Pioneer Award of the IEEE Neural Networks Council and the Hebb Award of the International Neural Network Society, is a Fellow of the International Neural Network Society, serves on the Scientific Advisory Board of the Human Brain Project, and has founded two successful companies.

_________________________________________________

György Buzsáki

Preconfigured brain dynamics and an alternative AI
Communication in any system requires an ‘agreement’ (or cipher) between the sender and the receiver. Messages are discretized (such as words in language) and separated by agreed symbols (e.g., punctuation). In the brain, such discretization is supported by the numerous network rhythms, because most rhythms involve inhibition, and inhibition is a natural punctuation/separation mechanism. The packaging of messages and their deciphering in the brain evolve not only in time but in neuronal space as well, due mainly to the slow propagation of spikes from structure to structure. These rhythm-based syntactical operations are well preserved over the course of evolution and provide the basis not only for communication among brain systems but also between brains, such as human language. Rhythm-supported communication mechanisms are preconfigured and, in principle, allow for generating infinite numbers of messages from preexisting cell assembly sequences. Thus, as an alternative to acquiring everything from scratch and increasing the complexity of neuronal dynamics with cumulative new learning, ‘matching’ between preexisting ‘lego-like’ neuronal sequence patterns and experience may be a better option for neuronal learning. AI based on similarly preconfigured dynamics may yield alternative solutions to current limitations of artificial networks.

Speaker Bio:

Many concepts in modern neuroscience can be traced back to Buzsáki. His work has contributed to the emerging understanding of the dynamics of the hippocampal system and the recognition of the importance of temporal firing properties in the formation of neural codes. Buzsáki identified a hierarchical organization of brain oscillations and systematically uncovered their mechanisms. He developed a conceptual framework to understand the fundamental synaptic mechanisms underlying theta, gamma, and sharp-wave ripple oscillations. His overarching hypothesis is that the numerous rhythms that the brain perpetually generates are responsible for the segmentation of neural information and for communication across brain regions. He proposed how these rhythms support a ‘brain syntax’, a physiological basis of cognitive operations. Buzsáki’s work changed how we think about information encoding in the healthy and the diseased brain, including epilepsy and psychiatric diseases. His most influential work is known as the two-stage model of memory trace consolidation, with hippocampal sharp-wave ripples serving as a transfer mechanism from hippocampus to neocortex. Several laboratories worldwide have adopted his framework and provided supporting evidence for the two-stage model of memory in both experimental animals and human subjects. Over the years, the ‘ripple’ pattern has become a quantifiable biomarker of cognition. Relevant to clinical translation, hippocampal ripples, along with other brain rhythms that his laboratory has identified, lend themselves to the diagnosis of disease and to drug discovery.

Throughout his career, Buzsáki has been a strong advocate for studying the intact brain in its natural state, a view that has been widely adopted and has transformed the way neuroscience is done today. His characteristic research style combines mathematical modeling with skilled multidisciplinary experimental design that includes electrophysiology, morphology, optogenetics, and behavioral analysis in the awake rodent. Buzsáki is known as an innovative and generous inventor of new technologies to probe brain activity and is at the forefront of the development of an open-access framework. He is among the top 0.2% of most-cited neuroscientists (Web of Science).

_________________________________________________

Dave Ackley

Robust-first computation: How to stop eating the glass sandwich

Efficiency and robustness are at odds over redundancy, which robustness requires but efficiency eliminates. Traditional digital computer architecture, based on central control and deterministic execution, emphasizes correctness and efficiency, with robustness added as needed for observed failures. The result is a glass sandwich, with rickety skyscrapers of inherently fragile software placed above the electronics and below the data center. Inspired by living systems principles, robust-first computation provides an alternative approach, striving for self-organization and agency at all scales.
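
The redundancy trade-off in the first sentence can be sketched in a few lines (an illustrative example, not Ackley's actual architecture): triple modular redundancy spends three times the storage so that a majority vote can mask any single corrupted copy.

```python
# Illustrative sketch (not Ackley's robust-first architecture): triple
# modular redundancy. Storing three copies is inefficient, but a
# majority vote over them masks any single fault -- robustness bought
# with redundancy.

def store(value):
    return [value, value, value]               # 3x the storage cost

def read(copies):
    return max(set(copies), key=copies.count)  # majority vote

copies = store(42)
copies[1] = 7                                  # corrupt one copy
assert read(copies) == 42                      # the fault is masked
```

An efficiency-first design would store the value once and fail on any corruption; the redundant version survives it, which is exactly the tension the abstract names.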

Speaker Bio:

David Ackley is Emeritus Professor of Computer Science at the University of New Mexico. David received a Ph.D. from Carnegie Mellon University. Before starting his academic position at the University of New Mexico, he was a member of the Cognitive Science Research Group at Bellcore. His ongoing research interests center on artificial life models and real artificial life; current research emphases include genetic algorithms and programming, distributed and social computing, robust self-aware systems, and computer security.

_________________________________________________

Joscha Bach

Abstract:

The first wave of AI concerned itself with implementing problem-solving functionality, while present end-to-end learning systems adapt an initial architecture to discover the problem-solving functionality by themselves. If a third wave of AI systems is to discover its own architecture and algorithms, we may have to think about how systems can assemble themselves from more basic parts, what properties these parts need to have, and what relationships exist between self-organizing components, their substrate, and the emergent functionality of intelligent systems. I suggest that at the core of this understanding lie notions of distributed agency, credit assignment, reward distribution and extending-loop control.

Speaker Bio:

Joscha Bach, PhD, is a cognitive scientist and AI researcher with a focus on computational models of cognition and neuro-symbolic AI. He has taught and worked in AI research at Humboldt University of Berlin, the Institute for Cognitive Science in Osnabrück, the MIT Media Lab, and the Harvard Program for Evolutionary Dynamics, and is currently a principal AI researcher at Intel Labs, California.