
Collective computation and learning in nonlinear networks

Jean-Jacques Slotine, MIT

Abstract:

The human brain grossly outperforms robotic algorithms in most tasks, using computational elements 7 orders of magnitude slower than their artificial counterparts. Similarly, current large-scale machine learning algorithms require millions of examples and close proximity to power plants, compared to the brain's few examples and 20 W consumption. We show that nonlinear systems tools, such as contraction analysis and virtual dynamical systems, yield simple but highly non-intuitive insights about collective computation in networks, in particular about the role of sparsity, and that they also suggest systematic mechanisms to build progressively more refined networks and novel algorithms through stable accumulation of functional building blocks and motifs. We discuss specifically contraction analysis of networks of natural gradient learners, asynchronous distributed adaptation, multiple time-scale primal-dual optimization, and linearization-free SLAM.
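As a rough illustration of the contraction idea referenced in the abstract (not taken from the talk itself): a system x_dot = f(x) is contracting in the identity metric when the symmetric part of its Jacobian is uniformly negative definite, which forces all trajectories to converge exponentially toward one another. The sketch below, with illustrative dynamics f chosen only for this example, numerically checks that condition over random samples of the state space.

```python
import numpy as np

# Minimal numerical sketch (assumed example, not from the talk):
# check the basic contraction condition for x_dot = f(x) in the identity metric.

def f(x):
    # Illustrative nonlinear dynamics chosen for this example only.
    return np.array([-x[0] + 0.2 * np.tanh(x[1]),
                     0.2 * np.tanh(x[0]) - x[1]])

def jacobian(f, x, eps=1e-6):
    # Finite-difference Jacobian of f at x.
    n = len(x)
    J = np.zeros((n, n))
    fx = f(x)
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

# Record the largest eigenvalue of the symmetric part of the Jacobian over
# random samples; a strictly negative supremum indicates contraction,
# i.e. exponential convergence of all trajectories toward each other.
rng = np.random.default_rng(0)
worst = -np.inf
for _ in range(1000):
    x = rng.uniform(-3.0, 3.0, size=2)
    J = jacobian(f, x)
    sym = 0.5 * (J + J.T)
    worst = max(worst, np.linalg.eigvalsh(sym).max())

print(f"max eigenvalue of symmetric Jacobian over samples: {worst:.3f}")
```

For these example dynamics the bound stays below zero, so the system is contracting; in general one would verify the condition analytically, possibly in a state-dependent metric.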

Presentation Slides

Video from Presentation