Large Associative Memory Problem in Neurobiology and Machine Learning – Dmitry Krotov, PhD

Dmitry Krotov, PhD, is a research staff member at the MIT-IBM Watson AI Lab and IBM Research Center in Cambridge, MA. He received his PhD in Physics from Princeton University in 2014, and was a research staff member at the Institute for Advanced Study in Princeton, NJ. His research focuses on implementing ideas coming from neurobiology in modern AI systems, as well as the relationship between models of associative memory and neural networks used in deep learning.

Dense Associative Memories, or modern Hopfield networks, permit storage and reliable retrieval of an exponentially large (in the dimension of the feature space) number of memories. At the same time, their naive implementation is not biologically plausible, since it seemingly requires the existence of many-body synaptic junctions between the neurons. We show that these models are effective descriptions of a more microscopic theory (written in terms of biological degrees of freedom) that has additional (hidden) neurons and requires only two-body interactions between them. For this reason, our proposed microscopic theory is a valid model of large associative memory with a degree of biological plausibility. The dynamics of our network and its reduced-dimensional equivalent both minimize energy (Lyapunov) functions. When certain dynamical variables (hidden neurons) are integrated out of our microscopic theory, one can recover many of the models that were previously discussed in the literature, e.g. the model presented in the "Hopfield Networks is All You Need" paper. We also provide an alternative derivation of the energy function and the update rule proposed in that paper and clarify the relationships between various models of this class.
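As a rough illustration of the retrieval dynamics the abstract alludes to, here is a minimal NumPy sketch of the attention-like update rule associated with the "Hopfield Networks is All You Need" formulation, in which a state vector is repeatedly projected onto a softmax-weighted combination of stored patterns. The pattern dimensions, the inverse temperature `beta`, and the corruption level are illustrative assumptions, not values from the talk.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax.
    e = np.exp(x - x.max())
    return e / e.sum()

def retrieve(query, X, beta, steps=3):
    """Iterate the modern Hopfield update: v <- X softmax(beta * X^T v).

    X holds the stored patterns as columns; beta is an inverse
    temperature controlling how sharply retrieval concentrates on
    the best-matching memory.
    """
    v = query.copy()
    for _ in range(steps):
        v = X @ softmax(beta * (X.T @ v))
    return v

# Store N random binary memories as columns of X (illustrative sizes).
rng = np.random.default_rng(0)
D, N = 64, 10
X = rng.choice([-1.0, 1.0], size=(D, N))
beta = 8.0

# Corrupt a stored memory, then retrieve it from the corrupted cue.
target = X[:, 0]
noisy = target.copy()
noisy[: D // 4] *= -1  # flip a quarter of the components
recovered = retrieve(noisy, X, beta)
overlap = np.sign(recovered) @ target / D  # close to 1 for successful retrieval
print(overlap)
```

For large `beta` the softmax concentrates on the single closest memory, which is the regime in which these networks can hold many more patterns than the classical Hopfield model; the two-body microscopic theory described in the abstract reproduces this effective update once the hidden neurons are integrated out.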

IARAI | Institute of Advanced Research in Artificial Intelligence

Posted on 3 January, 2024
