BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//132.216.98.100//NONSGML kigkonsult.se iCalcreator 2.20.4//
BEGIN:VEVENT
UID:20250725T193807EDT-3671iz7Lr7@132.216.98.100
DTSTAMP:20250725T233807Z
DESCRIPTION:ISS Informal Systems Seminar\n\nSpeaker: Petar Veličković – Staff Research Scientist\, DeepMind\, United Kingdom\n\nPresentation on YouTube\n\nThe last decade has witnessed an experimental revolution in data science and machine learning\, epitomised by deep learning methods. Indeed\, many high-dimensional learning tasks previously thought to be beyond reach – such as computer vision\, playing Go\, or protein folding – are in fact feasible with appropriate computational scale. Remarkably\, the essence of deep learning is built from two simple algorithmic principles: first\, the notion of representation or feature learning\, whereby adapted\, often hierarchical\, features capture the appropriate notion of regularity for each task\, and second\, learning by local gradient-descent-type methods\, typically implemented as backpropagation.\n\nWhile learning generic functions in high dimensions is a cursed estimation problem\, most tasks of interest are not generic\, and come with essential pre-defined regularities arising from the underlying low-dimensionality and structure of the physical world. This talk is concerned with exposing these regularities through unified geometric principles that can be applied throughout a wide spectrum of applications.\n\nSuch a ‘geometric unification’ endeavour in the spirit of Felix Klein's Erlangen Program serves a dual purpose: on the one hand\, it provides a common mathematical framework to study the most successful neural network architectures\, such as CNNs\, RNNs\, GNNs\, and Transformers. On the other hand\, it gives a constructive procedure to incorporate prior physical knowledge into neural architectures and provides a principled way to build future architectures yet to be invented.\n\nBiography: Dr. Veličković is a Staff Research Scientist at DeepMind\, an Affiliated Lecturer at the University of Cambridge\, and an Associate of Clare Hall\, Cambridge. He holds a PhD in Computer Science from the University of Cambridge (Trinity College)\, obtained under the supervision of Pietro Liò. His research concerns geometric deep learning – devising neural network architectures that respect the invariances and symmetries in data. For his contributions\, he is recognised as an ELLIS Scholar in the Geometric Deep Learning Program. In particular\, he focuses on graph representation learning and its applications in algorithmic reasoning (featured in VentureBeat). He is the first author of Graph Attention Networks – a popular convolutional layer for graphs – and Deep Graph Infomax – a popular self-supervised learning pipeline for graphs (featured in ZDNet). His research has been used to substantially improve travel-time predictions in Google Maps (featured in CNBC\, Engadget\, VentureBeat\, CNET\, The Verge\, and ZDNet)\, and to guide the intuition of mathematicians towards new top-tier theorems and conjectures (featured in Nature\, Science\, Quanta Magazine\, New Scientist\, The Independent\, Sky News\, The Sunday Times\, la Repubblica\, and The Conversation). See homepage.
DTSTART:20220916T180000Z
DTEND:20220916T190000Z
LOCATION:CA\, ZOOM
SUMMARY:Geometric Deep Learning
URL:/cim/channels/event/geometric-deep-learning-351686
END:VEVENT
END:VCALENDAR