Graph Element Networks

adaptive, structured learning and memory


Ferran Alet, Adarsh K. Jeewajee,

Maria Bauza, Alberto Rodriguez, Tomas Lozano-Perez, Leslie P. Kaelbling

Video summary

Prototype of Graph Element Networks as a generative memory

The video shows the GEN used as an external memory. Views from different places inside 9 adjacent mazes are inserted into the GEN. We then query the GEN for the inferred view at new coordinates, rotating 360 degrees at each position. For each query location, the red nodes mark the active nodes from which information is interpolated to generate the new view.
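The query step above can be sketched in a few lines. This is a simplified stand-in, not the paper's exact mechanism: it interpolates a latent state at a query coordinate from the nearest mesh nodes using inverse-distance weights (the actual system uses a mesh-based representation function), and the interpolated latent would then be fed to a learned decoder to produce the view.

```python
import numpy as np

def query_latent(node_pos, node_state, query, k=3):
    """Interpolate a latent state at `query` from the k nearest mesh nodes.

    node_pos: (N, 2) node coordinates; node_state: (N, D) latent states.
    Inverse-distance weighting is an illustrative stand-in for the
    mesh-based interpolation used in the actual system.
    """
    d = np.linalg.norm(node_pos - query, axis=1)
    idx = np.argsort(d)[:k]            # the "active" nodes for this query
    w = 1.0 / (d[idx] + 1e-8)          # closer nodes get larger weights
    w /= w.sum()
    return w @ node_state[idx]         # (D,) interpolated latent

# toy usage: 4 mesh nodes with 2-dimensional latent states
pos = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
state = np.array([[1., 0.], [0., 1.], [0., 0.], [1., 1.]])
z = query_latent(pos, state, np.array([0.1, 0.1]))
```

Querying exactly at a node's position recovers (approximately) that node's own latent, since its weight dominates.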

Relation to other disciplines

We have already explored some of these connections in our work:

Finite Element Methods: this class of methods inspired our solution, and it is likely that more insights from the field can be brought into machine learning. Conversely, Graph Element Networks can learn to produce results with accuracy similar to Finite Element Methods but with faster computation, and they can be applied in settings where we do not have a clear model of the world.
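At a high level, a Graph Element Network maps an input function (samples at arbitrary coordinates) to an output function (predictions at arbitrary query coordinates) via a mesh of latent nodes. The sketch below is an illustration of that three-stage flow, not the paper's architecture: plain linear maps with random weights stand in for the learned encoder, message function, and decoder, and soft spatial assignment stands in for the mesh-based interpolation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gen_forward(node_pos, edges, inputs, queries, steps=3, dim=8):
    """Minimal sketch of a Graph Element Network forward pass:
    1) scatter input samples to nearby mesh nodes (soft assignment),
    2) propagate node states along mesh edges (message passing),
    3) interpolate node states at query points and decode.
    """
    # hypothetical learned parameters, randomly initialized for illustration
    enc = rng.normal(size=(inputs.shape[1] - 2, dim)) * 0.1
    msg = rng.normal(size=(dim, dim)) * 0.1
    dec = rng.normal(size=(dim, 1)) * 0.1

    def weights(points):
        # soft spatial assignment of points to mesh nodes
        d = np.linalg.norm(points[:, None, :] - node_pos[None, :, :], axis=-1)
        w = np.exp(-d)
        return w / w.sum(axis=1, keepdims=True)

    # 1) encode: each input row is (x, y, value...)
    state = weights(inputs[:, :2]).T @ (inputs[:, 2:] @ enc)   # (n_nodes, dim)
    # 2) message passing over the mesh edges
    for _ in range(steps):
        agg = np.zeros_like(state)
        for i, j in edges:                  # undirected mesh edges
            agg[i] += state[j] @ msg
            agg[j] += state[i] @ msg
        state = np.tanh(state + agg)
    # 3) decode at the query coordinates
    return weights(queries) @ state @ dec   # (n_queries, 1)

# toy usage: a square mesh, two input samples, one query point
node_pos = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
inputs = np.array([[0.2, 0.2, 1.0], [0.8, 0.8, -1.0]])  # (x, y, value)
out = gen_forward(node_pos, edges, inputs, np.array([[0.5, 0.5]]))
```

Because inputs and queries are placed by coordinates rather than on a fixed grid, the same network handles irregularly sampled data, which is what makes the comparison with FEM solvers natural.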

Robotics: a system capable of learning custom differentiable dynamics models of objects would endow our robots with better predictive models for planning and control. We have applied it to a real-world, though still two-dimensional, dataset with a single object. We are now starting to work with multiple objects and plan to move to more complex objects, such as articulated objects and ropes, in future work.


There are some interesting connections that we have not yet explored:

Neuroscience (place cells and grid cells): our memory system creates a mesh over an environment and retrieves memories based on location. Place cells and grid cells, widely studied in the neuroscience literature, help build representations of space by tessellating it in a similar way. It would be very interesting to see whether neuroscience can inform better representation functions or grid connectivity functions for our algorithm.

Differential Geometry: the local representation function in Graph Element Networks depends on the local distance function of the metric space, a topic commonly studied in differential geometry. Within machine learning, this is already being explored by the Geometric Deep Learning community.

Computer Vision and convolutional neural networks: convolutional neural networks are great at capturing the structure of Euclidean space. We generalize them in two ways: 1) our system also applies in non-Euclidean spaces, and 2) we can run the same network at different mesh resolutions, trading compute for accuracy. This second aspect could be of special interest for making current vision systems faster and more efficient by focusing compute on the right parts of an image.
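The compute-accuracy trade-off comes from the fact that network parameters are shared across all nodes and edges, so the same trained network can be evaluated on a coarser or finer mesh. A small sketch of building grid meshes at two resolutions over the unit square (the grid layout and neighbor connectivity here are one simple choice, not the only one):

```python
import numpy as np

def grid_mesh(k):
    """Build a k-by-k grid mesh over the unit square: node positions plus
    edges between horizontal and vertical neighbors. Since GEN parameters
    are shared across nodes and edges, the same trained network runs on
    any such mesh, so k directly trades compute for spatial resolution."""
    xs = np.linspace(0.0, 1.0, k)
    pos = np.array([[x, y] for y in xs for x in xs])
    edges = []
    for r in range(k):
        for c in range(k):
            i = r * k + c
            if c + 1 < k:
                edges.append((i, i + 1))   # horizontal neighbor
            if r + 1 < k:
                edges.append((i, i + k))   # vertical neighbor
    return pos, edges

coarse_pos, coarse_edges = grid_mesh(3)    # 9 nodes, 12 edges: cheap
fine_pos, fine_edges = grid_mesh(9)        # 81 nodes, 144 edges: accurate
```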


Finally, there are likely many great applications still unknown to us, since many problems can be framed as learning a mapping from functions to functions over a space.

Paper with technical details

During the challenge, we submitted a preliminary version of this work to ICML (the International Conference on Machine Learning). We attach the manuscript with all the technical details. It does not yet include some extensions, such as optimizing the node positions and generating images in the memory experiment.

Graph_Element_Networks_ICML (5).pdf