About DANs and the Ambitions of the DAN Research Group
Dynamic Associative Networks (DANs) are neuromorphic models that exhibit elementary cognitive abilities despite their surprising simplicity. Unlike conventional artificial neural networks (ANNs), DANs adapt to their input (or learn) after a single exposure to training data by altering their wiring (that is, by adding and removing nodes and connections) rather than by setting weights through an exogenous training procedure such as backpropagation. Decisions about when and how to rewire are made by local, unsupervised learning procedures: nodes and connections are most often added according to a simplified Hebbian learning rule and removed on the basis of redundancy, inactivity over time, or auto-calibration.
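To make the rewiring idea concrete, here is a minimal sketch in Python of one-shot structural learning: co-active features are wired together by a simplified Hebbian rule, and connections that go unused are pruned. The class name, methods, and the inactivity rule are our own illustrative assumptions; the source describes no API.

    # Minimal sketch of one-shot structural learning in a DAN-style network.
    # All names (DAN, observe, prune) are illustrative, not the group's actual API.

    class DAN:
        def __init__(self, inactivity_limit=3):
            self.nodes = set()   # feature nodes, added on first exposure
            self.edges = {}      # (src, dst) -> ticks since last activation
            self.inactivity_limit = inactivity_limit

        def observe(self, features):
            """One exposure: add a node per unseen feature and a connection per
            co-active pair (a simplified Hebbian rule: wire together what fires
            together). No weights are trained; learning is purely structural."""
            for f in features:
                self.nodes.add(f)
            for a in features:
                for b in features:
                    if a != b:
                        self.edges[(a, b)] = 0   # created or refreshed by activity
            # Age every connection that did not just fire.
            for edge in self.edges:
                if edge[0] not in features or edge[1] not in features:
                    self.edges[edge] += 1

        def prune(self):
            """Remove connections that have been inactive too long."""
            self.edges = {e: age for e, age in self.edges.items()
                          if age < self.inactivity_limit}

    net = DAN()
    net.observe({"red", "round"})    # one shot: nodes and edges appear immediately
    net.observe({"red", "square"})
    net.prune()
    print(sorted(net.edges))         # still-fresh connections survive pruning

Note that every edge in the result exists for a stated reason (it was created by a specific co-activation and has fired recently), which is the kind of transparency discussed next.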
Because the structure of a DAN is comprehensible (we know why it is as it is) and the justification for each node and connection is clear, DANs afford a kind of transparency not possible with conventional ANNs. They avoid black boxes. In doing so, they also give networks access to information about their own internal states, which can in turn be exploited for meaningful self-adjustment, a form of metacognition. These are advanced and important affordances emerging from very simple mechanisms, and they hold promise for our research program.
Our only materials are nodes and connections, along with a small set of elementary equations (nothing that isn't taught in elementary school) to gate information through a network. These serve as building blocks for more complex components that can be assembled into larger structures, including world models, to perform increasingly rich cognitive tasks. Nonetheless, regardless of task, DANs are, at bottom, partially connected directed graphs that regulate information flow to produce cognitive ability. As such, they are perhaps better thought of as circuits than as information processors, their primary function being to get the right signal to the right place at the right time.
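As an illustration of that circuit view, the following sketch (again in Python) pushes signals through a partially connected directed graph using only elementary arithmetic. The toy graph and the sum-and-threshold gating rule are assumptions made for illustration, not the group's actual design.

    # Minimal sketch of a DAN viewed as a circuit: a partially-connected directed
    # graph that gates signals with elementary arithmetic (sum and threshold).

    graph = {                          # directed adjacency: source -> targets
        "in_a": ["gate"],
        "in_b": ["gate"],
        "gate": ["out"],
    }
    threshold = {"gate": 2, "out": 1}  # a node fires once enough inputs arrive

    def propagate(active):
        """Push activation through the graph until it settles."""
        incoming = {}
        while active:
            nxt = set()
            for node in active:
                for tgt in graph.get(node, []):
                    incoming[tgt] = incoming.get(tgt, 0) + 1  # elementary sum
                    if incoming[tgt] >= threshold[tgt]:       # simple gate
                        nxt.add(tgt)
            active = nxt
        return incoming

    print(propagate({"in_a"}))          # gate receives one signal: nothing fires
    print(propagate({"in_a", "in_b"}))  # gate fires, then out: right signal, right place

The point of the exercise is that routing, not weighted computation, does the work: the "gate" node passes a signal onward only when both inputs arrive together.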
To date, using DAN components and principles, we have developed toy models for attention, content-addressable memory, anticipation of directionality, object identification, comparison of similarity and difference, invariant shape detection, multi-modal association, emergent taxonomic classification, primary sequential memory, regulation of network-to-network information flow, eight-bit register control, and rudimentary natural language facility. Non-toy models have been used for automatic document classification and for prediction in stochastic contexts, and to explore metacognition.
To be clear, this work is exploratory and empirical. At this stage we are not interested in computational efficiency, since we have evidential reason to suspect that DANs can ultimately be compressed into more computationally efficient structures. Our goal, rather, is to architect networks that maximize cognition using the fewest principles in the clearest way possible, in order to better understand how cognition might arise from simple circuitry.
With this in mind, DANs should not be compared to GPTs and other large AI models; they are better considered fundamental models that might lead to larger and more exciting implementations, much as the neuromorphic models of the 1980s underlie the spectacular achievements of AI today. In this way, we also hope to contribute to the study of how and why network cognition might emerge in natural and artificial systems.
The potential payoff for further study of DANs could not be greater, and early success recommends pushing this research until it can be pushed no further.
Because they learn in one shot and exhibit metacognition, future DANs will be able to learn continuously "in the wild", correcting themselves as they seek a coherent mapping between their existing internal states and new information from their environment.
Because they have metacognition, they can signal the user when they land on ambiguous patterns rather than merely guessing, and can query the user (or the environment) for the specific information needed to disambiguate them.
Because they are explanatory and transparent, we can be assured of a kind of reliability not available in current models.
Because they grow as needed, DANs are not susceptible to catastrophic forgetting. They are also immune to adversarial attack.
Finally, because they avoid the training cost associated with existing models, they could lead to significant savings in energy consumption.
Of course, we're also philosophers, scientists, and possible paradigm changers. So, in addition to building models ourselves, or building models that build models themselves, our research includes serious study of the many theoretical implications raised by what we learn. Typically, this research examines how DANs rearrange fundamental questions concerning phenomenology, both in the formal, methodological sense and in the sense of an agent's qualitative experience. We also address what DANs might contribute to solving a "content problem" in dynamical systems theory, ecological psychology, and enactivism; that is, we seek to show how we might eliminate representations (specifically in the sense belonging to the "Representational Theory of Mind") and yet model solutions to what Andy Clark identifies as "representation-hungry" problems.