Cliff Young is a software engineer at Google Research, where he works on codesign for deep learning accelerators. He is one of the designers of Google’s Tensor Processing Unit (TPU) and one of the founders of the MLPerf benchmark. Previously, Cliff built special-purpose supercomputers for molecular dynamics at D. E. Shaw Research and was a Member of Technical Staff at Bell Labs. He holds AB, MS, and PhD degrees in computer science from Harvard University and is a member of ACM and IEEE.
Sara Hooker is a researcher at Google Brain working on training models that fulfill multiple desiderata. Her main research interests gravitate towards interpretability, model compression, and security. In 2014, she founded Delta Analytics, a non-profit dedicated to building technical capacity that helps non-profits across the world use machine learning for good.
Gintare Karolina Dziugaite is a Lead Research Scientist at Element AI, a ServiceNow company. She is also an associate member at Mila, the Quebec AI Institute. Her research combines theoretical and empirical approaches to understanding deep learning, with a focus on generalization and network compression. Before joining Element AI, she obtained her Ph.D. in machine learning from the University of Cambridge, under the supervision of Zoubin Ghahramani. Prior to that, she studied Mathematics at the University of Warwick and read Part III in Mathematics at the University of Cambridge, receiving a Master of Advanced Study (MASt) in Applied Mathematics. In 2020, Karolina was a member of the Institute for Advanced Study, participating in the special year on Optimization, Statistics, and Theoretical Machine Learning. In 2019, she was a Simons Fellow during the Foundations of Deep Learning program at the Simons Institute for the Theory of Computing at the University of California, Berkeley. She was also a long-term participant at the Simons Institute in 2017 and 2020 during programs on theoretical machine learning and interpretable machine learning.
Selima Curci is an Algorithm Researcher working alongside the research team of a healthcare start-up in the Netherlands, focused on developing a unique multisensor solution for 24-hour remote monitoring of risk groups. Selima recently graduated from the Eindhoven University of Technology with a Master's Degree in Data Science. During the master's, she specialized in deep learning and sparse neural networks with a thesis project entitled "Truly Sparse Neural Network at Scale". In her free time, Selima likes hiking, camping, and cooking.
Rosanne Liu is a research scientist at Google Brain, and co-founder and executive director of ML Collective, a nonprofit organization for open collaboration and accessible mentorship. Before that she was a founding member of Uber AI. She has published research at NeurIPS, ICLR, ICML, Science, and other top venues, and her work has been covered by WIRED, MIT Tech Review, Fortune, and others. She obtained her PhD in Computer Science at Northwestern University; while at school she used neural networks to help discover novel materials and to optimize fuel efficiency in hybrid vehicles. Outside of research, she supports underrepresented communities, organizes symposiums and workshops, and has run the weekly reading group “Deep Learning: Classics and Trends” since 2018. She serves as the Diversity, Equity & Inclusion co-chair of ICLR 2022.
Paulius Micikevicius is a Director in the Compute Architecture and Applied Deep Learning Research groups at NVIDIA. He joined NVIDIA in 2007, prior to which he was an assistant professor of computer science at Armstrong Atlantic State University. Paulius holds a PhD in computer science from the University of Central Florida.
Torsten is a Professor of Computer Science at ETH Zürich, Switzerland. He is also a key member of the Message Passing Interface (MPI) Forum, where he chairs the "Collective Operations and Topologies" working group. His research interests revolve around the central topic of "Performance-centric System Design" and include scalable networks, parallel programming techniques, and performance modeling. Torsten won best paper awards at the ACM/IEEE Supercomputing Conference SC10, SC13, SC14, and SC19, at EuroMPI'13, HPDC'15, HPDC'16, and IPDPS'15, and at other conferences. He has published numerous peer-reviewed conference and journal articles and authored chapters of the MPI-2.2 and MPI-3.0 standards. He received the Gordon Bell Prize, the Latsis Prize of ETH Zürich, as well as ERC Starting and Consolidator grants. Additional information about Torsten can be found on his homepage at htor.inf.ethz.ch.
Anna Golubeva is a research fellow at IAIFI, Boston, working at the intersection of deep learning and theoretical physics. She obtained her PhD in 2021 at the Perimeter Institute for Theoretical Physics and the University of Waterloo, advised by Roger Melko. Her main research focus is on developing a theory of deep learning using approaches from theoretical physics. Her goal is to contribute towards understanding the tools of AI and leveraging them to advance both AI and the physical sciences. Her projects include both the application of deep learning methods to quantum many-body problems and theory-based analyses of deep learning systems, exploiting approaches from information theory, statistical learning theory, and statistical physics.
Friedemann Zenke is a junior group leader in computational neuroscience at the Friedrich Miescher Institute for Biomedical Research (FMI) in Basel, Switzerland. He is broadly interested in information processing in biological neural networks, focusing on learning and plasticity, continual learning, and spiking neural networks. His group combines both analytical and computational approaches from computational neuroscience, neuromorphic engineering, and machine learning. Friedemann studied physics at the University of Bonn, Germany, and the Australian National University in Canberra, Australia, initially focusing on experimental hadron physics. He then shifted to computational neuroscience for his Ph.D. with Wulfram Gerstner at the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, where he worked on the theory of synaptic and homeostatic plasticity in spiking neural networks. Subsequently, Friedemann joined Surya Ganguli’s group at Stanford as a post-doc to study the role of complex synaptic dynamics as a remedy for catastrophic forgetting in deep neural networks. Later he moved to the University of Oxford as a Sir Henry Wellcome fellow, where he worked with Tim Vogels. During this time, he further developed functionally inspired learning rules for spiking neural networks.
Natalia Vassilieva is Director of Product, Machine Learning at Cerebras Systems, where she leads market, application, and algorithm analysis for ML use cases. Before joining Cerebras, she was a Senior Research Manager at Hewlett Packard Labs, where she led the Software and AI group and worked on performance characterization and modeling of deep learning workloads, fast Monte Carlo simulations, and the systems software, programming paradigms, algorithms, and applications for the HP memory-driven computing project. Natalia served as the head of HP Labs Russia from 2011 to 2015, and was an associate professor at Saint Petersburg State University and a lecturer at the Saint Petersburg Computer Science Center. She holds a PhD in mathematics, computer science, and information technology from Saint Petersburg State University.
Mitchell is a PhD student at the University of Washington interested in understanding and improving neural networks. His recent research has focused on neural network sub-networks, sub-spaces, and robustness. Mitchell is also a member of ML Collective, and previously worked at the Allen Institute for Artificial Intelligence.