Learning algorithms inspired by the brain rely on local computations that stem from synaptic and neural dynamics. These methods are better suited to physical artificial neural networks than the backpropagation of errors commonly used to compute gradients in software networks. However, they struggle to reach high accuracy on complex tasks.
We are devising new learning algorithms that aim to combine the accuracy of backpropagation with the hardware compatibility of brain-inspired techniques. These algorithms leverage physical phenomena, such as relaxation towards a minimum-energy state or transient dynamics, to recognize and classify data in supervised or unsupervised settings.
The hardware neural networks we develop, based on spintronics, oxides, or quantum mechanics, serve as natural platforms to implement these algorithms through the physics and dynamics of their components. While maintaining algorithmic precision on complex tasks is challenging, given the noisy and variable characteristics of nano-components, we believe it is achievable: after all, the brain does it. Inspired by this biological feat, we seek solutions that combine high performance, fault resilience, and low energy consumption.

Neuromorphic Algorithms


All our code is openly available on our team's GitHub.
Key results and publications:
Self-Contrastive Forward-Forward Algorithm (SCFF): The Forward-Forward (FF) algorithm is promising for neuromorphic implementations because it offers a purely feedforward, local approach to optimizing neural networks, avoiding the need for traditional backpropagation. However, FF struggles to reach state-of-the-art performance because generating reliable negative data for unsupervised learning is difficult. To address this, we introduce the Self-Contrastive Forward-Forward (SCFF) algorithm, which uses self-supervised contrastive learning techniques to substantially improve FF performance across datasets including MNIST, CIFAR-10, and STL-10. SCFF also extends FF to recurrent neural networks, making it suitable for sequential tasks and enabling efficient real-time learning on resource-constrained edge devices (a minimal sketch of the idea follows below). [arXiv]
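The PyTorch-style sketch below illustrates the self-contrastive flavor of FF training only in outline. It assumes, for illustration, that positive inputs are built by concatenating a sample with itself and negative inputs by concatenating two different samples, and that each layer is trained locally with the usual FF "goodness" objective; the names (`make_pairs`, `FFLayer`) and hyperparameters are hypothetical, not the team's actual API.

```python
# Illustrative sketch of a self-contrastive Forward-Forward layer update.
# Assumptions (not taken from the published code): positives concatenate a
# sample with itself, negatives concatenate two different samples, and each
# layer is optimized locally with the FF "goodness" objective.
import torch
import torch.nn.functional as F

def make_pairs(x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Build self-contrastive positive/negative inputs from a batch x of shape (B, D)."""
    pos = torch.cat([x, x], dim=1)                               # sample paired with itself
    neg = torch.cat([x, x[torch.randperm(x.size(0))]], dim=1)    # sample paired with another one
    return pos, neg

class FFLayer(torch.nn.Module):
    def __init__(self, in_dim: int, out_dim: int, threshold: float = 2.0, lr: float = 1e-3):
        super().__init__()
        self.linear = torch.nn.Linear(in_dim, out_dim)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def goodness(self, h: torch.Tensor) -> torch.Tensor:
        return h.pow(2).mean(dim=1)                              # mean squared activity per sample

    def train_step(self, pos: torch.Tensor, neg: torch.Tensor) -> float:
        h_pos = F.relu(self.linear(pos))
        h_neg = F.relu(self.linear(neg))
        # Push goodness above threshold for positives, below it for negatives.
        loss = F.softplus(torch.cat([self.threshold - self.goodness(h_pos),
                                     self.goodness(h_neg) - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()                                          # gradient stays local to this layer
        self.opt.step()
        return loss.item()

# Usage: one local update on a random batch standing in for flattened MNIST images.
x = torch.randn(64, 784)
layer = FFLayer(in_dim=2 * 784, out_dim=500)
pos, neg = make_pairs(x)
print(layer.train_step(pos, neg))
```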
Equilibrium Propagation (EP): Equilibrium Propagation harnesses energy minimization in physical systems to compute gradients that closely approximate those of backpropagation through time (NeurIPS 2019). We have developed a spiking version of Equilibrium Propagation, called EqSpike, that enables self-learning: each synapse is programmed directly by the spikes of the two neurons it connects (iScience 2021). We have also developed a version with binary synapses and neurons (CVPR 2021), and used it to train D-Wave Ising machine hardware to recognize MNIST handwritten digits through forward and reverse annealing (Nature Communications 2024). A toy sketch of the two-phase EP update is shown below.
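The NumPy sketch below is a toy illustration of the two-phase EP procedure on a one-hidden-layer rate network, not our hardware or published implementation. It assumes hard-sigmoid activations, a leaky relaxation of the state toward a fixed point (the "prototypical" EP setting), and the standard contrastive weight update (nudged-phase minus free-phase correlations, scaled by 1/β); the sizes and hyperparameters are illustrative.

```python
# Toy sketch of two-phase Equilibrium Propagation on a one-hidden-layer network.
# Assumptions (illustrative): hard-sigmoid activations, leaky relaxation of the
# neuron states toward a fixed point, and the contrastive EP weight update.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 10, 20, 2
W1 = rng.normal(0, 0.1, (n_in, n_hid))     # input  -> hidden coupling
W2 = rng.normal(0, 0.1, (n_hid, n_out))    # hidden -> output coupling

rho = lambda s: np.clip(s, 0.0, 1.0)       # hard-sigmoid "firing rate"

def relax(x, y, beta, h, o, steps=30, dt=0.5):
    """Relax hidden (h) and output (o) states toward a fixed point; when
    beta > 0 the output is additionally nudged toward the target y."""
    for _ in range(steps):
        dh = -h + rho(x @ W1 + rho(o) @ W2.T)
        do = -o + rho(rho(h) @ W2) + beta * (y - o)
        h, o = h + dt * dh, o + dt * do
    return h, o

def ep_step(x, y, beta=0.5, lr=0.05):
    global W1, W2
    h0, o0 = np.zeros(n_hid), np.zeros(n_out)
    h_free, o_free = relax(x, y, beta=0.0, h=h0, o=o0)               # free phase
    h_nudge, o_nudge = relax(x, y, beta=beta, h=h_free, o=o_free)    # nudged phase
    # Local, contrastive update: nudged-phase correlations minus free-phase ones.
    W1 += lr / beta * (np.outer(x, rho(h_nudge)) - np.outer(x, rho(h_free)))
    W2 += lr / beta * (np.outer(rho(h_nudge), rho(o_nudge))
                       - np.outer(rho(h_free), rho(o_free)))
    return o_free

# Usage: repeated updates nudge the free-phase output toward a one-hot target.
x, y = rng.random(n_in), np.array([1.0, 0.0])
for _ in range(20):
    out = ep_step(x, y)
print(out)
```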
Unsupervised end-to-end training with a self-defined target: Designing AI algorithms for edge devices that can learn from both labeled and unlabeled data is challenging due to hardware constraints. While deep end-to-end training is accurate, self-supervised learning demands excessive computational resources, making it unsuitable for embedded systems. In contrast, unsupervised layer-wise training is hardware-friendly but does not integrate well with supervised learning. To bridge this gap, we introduce winner-take-all selectivity and homeostasis regularization in the output layer, enabling high-performance unsupervised learning within networks originally designed for supervised training (see the sketch below). This approach achieves strong results on datasets such as MNIST, Fashion-MNIST, and SVHN, and extends to semi-supervised learning, reaching 96.6% accuracy on MNIST with only 600 labeled samples. Our findings suggest that this method allows AI models to flexibly adapt to different levels of labeled-data availability while remaining computationally efficient (NCE 2024).
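The sketch below is one possible reading of the self-defined-target idea, not the published implementation. It assumes that the target for each unlabeled sample is the winner-take-all index of the output layer after a homeostatic bias that penalizes units that have won too often, and that the network is then trained end-to-end with a standard cross-entropy loss; the network, `win_rate` tracker, and hyperparameters are hypothetical.

```python
# Illustrative sketch of unsupervised end-to-end training with a self-defined
# target. Assumptions (one possible reading, not the published code): the target
# is the winner-take-all unit after a homeostatic bias, and the network is then
# trained end-to-end with cross-entropy against that self-defined target.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(784, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
win_rate = torch.zeros(10)          # running estimate of how often each output unit wins
target_rate = 1.0 / 10              # homeostatic goal: all units win equally often
homeo_strength, momentum = 5.0, 0.99

def unsupervised_step(x: torch.Tensor) -> float:
    global win_rate
    logits = net(x)
    # Self-defined target: winner-take-all on homeostatically biased activations.
    with torch.no_grad():
        biased = logits - homeo_strength * (win_rate - target_rate)
        target = biased.argmax(dim=1)
        counts = torch.bincount(target, minlength=10).float() / x.size(0)
        win_rate = momentum * win_rate + (1 - momentum) * counts
    loss = F.cross_entropy(logits, target)   # end-to-end gradient, no labels used
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage: one update on a random batch standing in for flattened MNIST images.
print(unsupervised_step(torch.rand(128, 784)))
```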