Federated learning (FL) combined with differential privacy (DP) offers machine learning (ML) training on distributed devices with a formal privacy guarantee. With a large population of devices, FL with DP produces a performant model in a timely manner. However, for applications with a smaller population, not only does model utility degrade, since the DP noise is inversely proportional to the population size, but training latency also increases, since waiting for enough clients to become available from a smaller pool takes longer. In this work, we thus propose expanding the population based on…
Apple Machine Learning Research
The Role of Entropy and Reconstruction for Multi-View Self-Supervised Learning
The mechanisms behind the success of multi-view self-supervised learning (MVSSL) are not yet fully understood. Contrastive MVSSL methods have been studied through the lens of InfoNCE, a lower bound on the mutual information (MI). However, the relation between other MVSSL methods and MI remains unclear. We consider a different lower bound on the MI, consisting of an entropy and a reconstruction term (ER), and analyze the main MVSSL families through its lens. Through this ER bound, we show that clustering-based methods such as DeepCluster and SwAV maximize the MI. We also re-interpret the…
Apple Machine Learning Research
Google at ICML 2023
Groups across Google actively pursue research in the field of machine learning (ML), ranging from theory to application. We build ML systems to solve deep scientific and engineering challenges in areas of language, music, visual processing, algorithm development, and more. We aim to build a more collaborative ecosystem with the broader ML research community through open-sourcing tools and datasets, publishing our work, and actively participating in conferences.
Google is proud to be a Diamond Sponsor of the 40th International Conference on Machine Learning (ICML 2023), a premier annual conference, which is being held this week in Honolulu, Hawaii. As a leader in ML research, Google has a strong presence at this year’s conference with over 120 accepted papers and active involvement in a number of workshops and tutorials. Google is also proud to be a Platinum Sponsor for both the LatinX in AI and Women in Machine Learning workshops. We look forward to sharing some of our extensive ML research and expanding our partnership with the broader ML research community.
Registered for ICML 2023? We hope you’ll visit the Google booth to learn more about the exciting work, creativity, and fun that goes into solving a portion of the field’s most interesting challenges. Visit the @GoogleAI Twitter account to find out about Google booth activities (e.g., demos and Q&A sessions). See Google DeepMind’s blog to learn about their technical participation at ICML 2023.
Take a look below to learn more about the Google research being presented at ICML 2023 (Google affiliations in bold).
Board and Organizing Committee
Board Members include: Corinna Cortes, Hugo Larochelle
Tutorial Chairs include: Hanie Sedghi
Google Research booth activities
Presenters: Bryan Perozzi, Anton Tsitsulin, Brandon Mayer
Title: Unsupervised Graph Embedding @ Google (paper, EXPO workshop)
Tuesday, July 25th at 10:30 AM HST
Presenters: Zheng Xu
Title: Federated Learning of Gboard Language Models with Differential Privacy (paper 1, paper 2, blog post)
Tuesday, July 25th at 3:30 PM HST
Presenters: Thomas Kipf
Title: Self-supervised scene understanding (paper 1, paper 2)
Wednesday, July 26th at 10:30 AM HST
Presenters: Johannes von Oswald, Max Vladymyrov
Title: Transformers learn in-context by gradient descent (paper)
Wednesday, July 26th at 3:30 PM HST
Accepted papers
Scaling Vision Transformers to 22 Billion Parameters (see blog post)
Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, Rodolphe Jenatton, Lucas Beyer, Michael Tschannen, Anurag Arnab, Xiao Wang, Carlos Riquelme, Matthias Minderer, Joan Puigcerver, Utku Evci, Manoj Kumar, Sjoerd van Steenkiste, Gamaleldin F. Elsayed, Aravindh Mahendran, Fisher Yu, Avital Oliver, Fantine Huot, Jasmijn Bastings, Mark Patrick Collier, Alexey Gritsenko, Vighnesh Birodkar, Cristina Vasconcelos, Yi Tay, Thomas Mensink, Alexander Kolesnikov, Filip Pavetić, Dustin Tran, Thomas Kipf, Mario Lučić, Xiaohua Zhai, Daniel Keysers, Jeremiah Harmsen, Neil Houlsby
Fast Inference from Transformers via Speculative Decoding
Yaniv Leviathan, Matan Kalman, Yossi Matias
Best of Both Worlds Policy Optimization
Christoph Dann, Chen-Yu Wei, Julian Zimmert
Inflow, Outflow, and Reciprocity in Machine Learning
Mukund Sundararajan, Walid Krichene
Transformers Learn In-Context by Gradient Descent
Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, Max Vladymyrov
Arithmetic Sampling: Parallel Diverse Decoding for Large Language Models
Luke Vilnis, Yury Zemlyanskiy, Patrick Murray*, Alexandre Passos*, Sumit Sanghai
Differentially Private Hierarchical Clustering with Provable Approximation Guarantees (see blog post)
Jacob Imola*, Alessandro Epasto, Mohammad Mahdian, Vincent Cohen-Addad, Vahab Mirrokni
Multi-Epoch Matrix Factorization Mechanisms for Private Machine Learning
Christopher A. Choquette-Choo, H. Brendan McMahan, Keith Rush, Abhradeep Thakurta
Random Classification Noise Does Not Defeat All Convex Potential Boosters Irrespective of Model Choice
Yishay Mansour, Richard Nock, Robert Williamson
Simplex Random Features
Isaac Reid, Krzysztof Choromanski, Valerii Likhosherstov, Adrian Weller
Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding
Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova
Mu2SLAM: Multitask, Multilingual Speech and Language Models
Yong Cheng, Yu Zhang, Melvin Johnson, Wolfgang Macherey, Ankur Bapna
Robust Budget Pacing with a Single Sample
Santiago Balseiro, Rachitesh Kumar*, Vahab Mirrokni, Balasubramanian Sivan, Di Wang
A Statistical Perspective on Retrieval-Based Models
Soumya Basu, Ankit Singh Rawat, Manzil Zaheer
Approximately Optimal Core Shapes for Tensor Decompositions
Mehrdad Ghadiri, Matthew Fahrbach, Gang Fu, Vahab Mirrokni
Efficient List-Decodable Regression Using Batches
Abhimanyu Das, Ayush Jain*, Weihao Kong, Rajat Sen
Efficient Training of Language Models Using Few-Shot Learning
Sashank J. Reddi, Sobhan Miryoosefi, Stefani Karp, Shankar Krishnan, Satyen Kale, Seungyeon Kim, Sanjiv Kumar
Fully Dynamic Submodular Maximization Over Matroids
Paul Duetting, Federico Fusco, Silvio Lattanzi, Ashkan Norouzi-Fard, Morteza Zadimoghaddam
GFlowNet-EM for Learning Compositional Latent Variable Models
Edward J Hu, Nikolay Malkin, Moksh Jain, Katie Everett, Alexandros Graikos, Yoshua Bengio
Improved Online Learning Algorithms for CTR Prediction in Ad Auctions
Zhe Feng, Christopher Liaw, Zixin Zhou
Large Language Models Struggle to Learn Long-Tail Knowledge
Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, Colin Raffel
Multi-channel Autobidding with Budget and ROI Constraints
Yuan Deng, Negin Golrezaei, Patrick Jaillet, Jason Cheuk Nam Liang, Vahab Mirrokni
Multi-layer Neural Networks as Trainable Ladders of Hilbert Spaces
Zhengdao Chen
On User-Level Private Convex Optimization
Badih Ghazi, Pritish Kamath, Ravi Kumar, Raghu Meka, Pasin Manurangsi, Chiyuan Zhang
PAC Generalization via Invariant Representations
Advait U Parulekar, Karthikeyan Shanmugam, Sanjay Shakkottai
Regularization and Variance-Weighted Regression Achieves Minimax Optimality in Linear MDPs: Theory and Practice
Toshinori Kitamura, Tadashi Kozuno, Yunhao Tang, Nino Vieillard, Michal Valko, Wenhao Yang, Jincheng Mei, Pierre Menard, Mohammad Gheshlaghi Azar, Remi Munos, Olivier Pietquin, Matthieu Geist, Csaba Szepesvari, Wataru Kumagai, Yutaka Matsuo
Speeding Up Bellman Ford via Minimum Violation Permutations
Silvio Lattanzi, Ola Svensson, Sergei Vassilvitskii
Statistical Indistinguishability of Learning Algorithms
Alkis Kalavasis, Amin Karbasi, Shay Moran, Grigoris Velegkas
Test-Time Adaptation with Slot-Centric Models
Mihir Prabhudesai, Anirudh Goyal, Sujoy Paul, Sjoerd van Steenkiste, Mehdi S. M. Sajjadi, Gaurav Aggarwal, Thomas Kipf, Deepak Pathak, Katerina Fragkiadaki
Algorithms for Bounding Contribution for Histogram Estimation Under User-Level Privacy
Yuhan Liu*, Ananda Theertha Suresh, Wennan Zhu, Peter Kairouz, Marco Gruteser
Bandit Online Linear Optimization with Hints and Queries
Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit
CLUTR: Curriculum Learning via Unsupervised Task Representation Learning
Abdus Salam Azad, Izzeddin Gur, Jasper Emhoff, Nathaniel Alexis, Aleksandra Faust, Pieter Abbeel, Ion Stoica
CSP: Self-Supervised Contrastive Spatial Pre-training for Geospatial-Visual Representations
Gengchen Mai, Ni Lao, Yutong He, Jiaming Song, Stefano Ermon
Ewald-Based Long-Range Message Passing for Molecular Graphs
Arthur Kosmala, Johannes Gasteiger, Nicholas Gao, Stephan Günnemann
Fast (1+ε)-Approximation Algorithms for Binary Matrix Factorization
Ameya Velingker, Maximilian Vötsch, David Woodruff, Samson Zhou
Federated Linear Contextual Bandits with User-Level Differential Privacy
Ruiquan Huang, Huanyu Zhang, Luca Melis, Milan Shen, Meisam Hejazinia, Jing Yang
Investigating the Role of Model-Based Learning in Exploration and Transfer
Jacob C Walker, Eszter Vértes, Yazhe Li, Gabriel Dulac-Arnold, Ankesh Anand, Theophane Weber, Jessica B Hamrick
Label Differential Privacy and Private Training Data Release
Robert Busa-Fekete, Andres Munoz, Umar Syed, Sergei Vassilvitskii
Lifelong Language Pretraining with Distribution-Specialized Experts
Wuyang Chen*, Yanqi Zhou, Nan Du, Yanping Huang, James Laudon, Zhifeng Chen, Claire Cui
Multi-User Reinforcement Learning with Low Rank Rewards
Dheeraj Mysore Nagaraj, Suhas S Kowshik, Naman Agarwal, Praneeth Netrapalli, Prateek Jain
Multi-View Masked World Models for Visual Robotic Manipulation
Younggyo Seo, Junsu Kim, Stephen James, Kimin Lee, Jinwoo Shin, Pieter Abbeel
PaLM-E: An Embodied Multimodal Language Model (see blog post)
Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, Pete Florence
Private Federated Learning with Autotuned Compression
Enayat Ullah*, Christopher A. Choquette-Choo, Peter Kairouz, Sewoong Oh
Refined Regret for Adversarial MDPs with Linear Function Approximation
Yan Dai, Haipeng Luo, Chen-Yu Wei, Julian Zimmert
Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory
Justin Cui, Ruoche Wan, Si Si, Cho-Jui Hsieh
SGD with AdaGrad Stepsizes: Full Adaptivity with High Probability to Unknown Parameters, Unbounded Gradients and Affine Variance
Amit Attia, Tomer Koren
The Statistical Benefits of Quantile Temporal-Difference Learning for Value Estimation
Mark Rowland, Yunhao Tang, Clare Lyle, Rémi Munos, Marc G. Bellemare, Will Dabney
Unveiling The Mask of Position-Information Pattern Through the Mist of Image Features
Chieh Hubert Lin, Hung-Yu Tseng, Hsin-Ying Lee, Maneesh Kumar Singh, Ming-Hsuan Yang
User-Level Private Stochastic Convex Optimization with Optimal Rates
Raef Bassily, Ziteng Sun
A Simple Zero-Shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models
James Urquhart Allingham*, Jie Ren, Michael W Dusenberry, Xiuye Gu, Yin Cui, Dustin Tran, Jeremiah Zhe Liu, Balaji Lakshminarayanan
Can Large Language Models Reason About Program Invariants?
Kexin Pei, David Bieber, Kensen Shi, Charles Sutton, Pengcheng Yin
Concurrent Shuffle Differential Privacy Under Continual Observation
Jay Tenenbaum, Haim Kaplan, Yishay Mansour, Uri Stemmer
Constant Matters: Fine-Grained Error Bound on Differentially Private Continual Observation
Hendrik Fichtenberger, Monika Henzinger, Jalaj Upadhyay
Cross-Entropy Loss Functions: Theoretical Analysis and Applications
Anqi Mao, Mehryar Mohri, Yutao Zhong
Efficient Rate Optimal Regret for Adversarial Contextual MDPs Using Online Function Approximation
Orin Levy, Alon Cohen, Asaf Cassel, Yishay Mansour
Fairness in Streaming Submodular Maximization Over a Matroid Constraint
Marwa El Halabi, Federico Fusco, Ashkan Norouzi-Fard, Jakab Tardos, Jakub Tarnawski
The Flan Collection: Designing Data and Methods for Effective Instruction Tuning (see blog post)
Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, Adam Roberts
Graph Reinforcement Learning for Network Control via Bi-level Optimization
Daniele Gammelli, James Harrison, Kaidi Yang, Marco Pavone, Filipe Rodrigues, Francisco C. Pereira
Learning-Augmented Private Algorithms for Multiple Quantile Release
Mikhail Khodak*, Kareem Amin, Travis Dick, Sergei Vassilvitskii
LegendreTron: Uprising Proper Multiclass Loss Learning
Kevin H Lam, Christian Walder, Spiridon Penev, Richard Nock
Measuring the Impact of Programming Language Distribution
Gabriel Orlanski*, Kefan Xiao, Xavier Garcia, Jeffrey Hui, Joshua Howland, Jonathan Malmaud, Jacob Austin, Rishabh Singh, Michele Catasta*
Multi-task Differential Privacy Under Distribution Skew
Walid Krichene, Prateek Jain, Shuang Song, Mukund Sundararajan, Abhradeep Thakurta, Li Zhang
Muse: Text-to-Image Generation via Masked Generative Transformers
Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, José Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T. Freeman, Michael Rubinstein, Yuanzhen Li, Dilip Krishnan
On the Convergence of Federated Averaging with Cyclic Client Participation
Yae Jee Cho, Pranay Sharma, Gauri Joshi, Zheng Xu, Satyen Kale, Tong Zhang
Optimal Stochastic Non-smooth Non-convex Optimization Through Online-to-Non-convex Conversion
Ashok Cutkosky, Harsh Mehta, Francesco Orabona
Out-of-Domain Robustness via Targeted Augmentations
Irena Gao, Shiori Sagawa, Pang Wei Koh, Tatsunori Hashimoto, Percy Liang
Polynomial Time and Private Learning of Unbounded Gaussian Mixture Models
Jamil Arbas, Hassan Ashtiani, Christopher Liaw
Pre-computed Memory or On-the-Fly Encoding? A Hybrid Approach to Retrieval Augmentation Makes the Most of Your Compute
Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Joshua Ainslie, Sumit Sanghai, Fei Sha, William W. Cohen
Scalable Adaptive Computation for Iterative Generation
Allan Jabri*, David J. Fleet, Ting Chen
Scaling Spherical CNNs
Carlos Esteves, Jean-Jacques Slotine, Ameesh Makadia
STEP: Learning N:M Structured Sparsity Masks from Scratch with Precondition
Yucheng Lu, Shivani Agrawal, Suvinay Subramanian, Oleg Rybakov, Christopher De Sa, Amir Yazdanbakhsh
Stratified Adversarial Robustness with Rejection
Jiefeng Chen, Jayaram Raghuram, Jihye Choi, Xi Wu, Yingyu Liang, Somesh Jha
When Does Privileged Information Explain Away Label Noise?
Guillermo Ortiz-Jimenez*, Mark Collier, Anant Nawalgaria, Alexander D’Amour, Jesse Berent, Rodolphe Jenatton, Effrosyni Kokiopoulou
Adaptive Computation with Elastic Input Sequence
Fuzhao Xue*, Valerii Likhosherstov, Anurag Arnab, Neil Houlsby, Mostafa Dehghani, Yang You
Can Neural Network Memorization Be Localized?
Pratyush Maini, Michael C. Mozer, Hanie Sedghi, Zachary C. Lipton, J. Zico Kolter, Chiyuan Zhang
Controllability-Aware Unsupervised Skill Discovery
Seohong Park, Kimin Lee, Youngwoon Lee, Pieter Abbeel
Efficient Learning of Mesh-Based Physical Simulation with Bi-Stride Multi-Scale Graph Neural Network
Yadi Cao, Menglei Chai, Minchen Li, Chenfanfu Jiang
Federated Heavy Hitter Recovery Under Linear Sketching
Adria Gascon, Peter Kairouz, Ziteng Sun, Ananda Theertha Suresh
Graph Generative Model for Benchmarking Graph Neural Networks
Minji Yoon, Yue Wu, John Palowitch, Bryan Perozzi, Russ Salakhutdinov
H-Consistency Bounds for Pairwise Misranking Loss Surrogates
Anqi Mao, Mehryar Mohri, Yutao Zhong
Improved Regret for Efficient Online Reinforcement Learning with Linear Function Approximation
Uri Sherman, Tomer Koren, Yishay Mansour
Invariant Slot Attention: Object Discovery with Slot-Centric Reference Frames
Ondrej Biza*, Sjoerd van Steenkiste, Mehdi S. M. Sajjadi, Gamaleldin Fathy Elsayed, Aravindh Mahendran, Thomas Kipf
Multi-task Off-Policy Learning from Bandit Feedback
Joey Hong, Branislav Kveton, Manzil Zaheer, Sumeet Katariya, Mohammad Ghavamzadeh
Optimal No-Regret Learning for One-Sided Lipschitz Functions
Paul Duetting, Guru Guruganesh, Jon Schneider, Joshua Ruizhi Wang
Policy Mirror Ascent for Efficient and Independent Learning in Mean Field Games
Batuhan Yardim, Semih Cayci, Matthieu Geist, Niao He
Regret Minimization and Convergence to Equilibria in General-Sum Markov Games
Liad Erez, Tal Lancewicki, Uri Sherman, Tomer Koren, Yishay Mansour
Reinforcement Learning Can Be More Efficient with Multiple Rewards
Christoph Dann, Yishay Mansour, Mehryar Mohri
Reinforcement Learning with History-Dependent Dynamic Contexts
Guy Tennenholtz, Nadav Merlis, Lior Shani, Martin Mladenov, Craig Boutilier
User-Defined Event Sampling and Uncertainty Quantification in Diffusion Models for Physical Dynamical Systems
Marc Anton Finzi*, Anudhyan Boral, Andrew Gordon Wilson, Fei Sha, Leonardo Zepeda-Nunez
Discrete Key-Value Bottleneck
Frederik Träuble, Anirudh Goyal, Nasim Rahaman, Michael Curtis Mozer, Kenji Kawaguchi, Yoshua Bengio, Bernhard Schölkopf
DSGD-CECA: Decentralized SGD with Communication-Optimal Exact Consensus Algorithm
Lisang Ding, Kexin Jin, Bicheng Ying, Kun Yuan, Wotao Yin
Exphormer: Sparse Transformers for Graphs
Hamed Shirzad, Ameya Velingker, Balaji Venkatachalam, Danica J. Sutherland, Ali Kemal Sinop
Fast, Differentiable and Sparse Top-k: A Convex Analysis Perspective
Michael Eli Sander*, Joan Puigcerver, Josip Djolonga, Gabriel Peyré, Mathieu Blondel
Improved Policy Evaluation for Randomized Trials of Algorithmic Resource Allocation
Aditya Mate, Bryan Wilder, Aparna Taneja, Milind Tambe
In Search for a Generalizable Method for Source Free Domain Adaptation
Malik Boudiaf*, Tom Denton, Bart van Merrienboer, Vincent Dumoulin, Eleni Triantafillou
Learning Rate Schedules in the Presence of Distribution Shift
Matthew Fahrbach, Adel Javanmard, Vahab Mirrokni, Pratik Worah
Not All Semantics Are Created Equal: Contrastive Self-Supervised Learning with Automatic Temperature Individualization
Zi-Hao Qiu, Quanqi Hu, Zhuoning Yuan, Denny Zhou, Lijun Zhang, Tianbao Yang
On the Relationship Between Explanation and Prediction: A Causal View
Amir-Hossein Karimi*, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim
On the Role of Attention in Prompt-Tuning
Samet Oymak, Ankit Singh Rawat, Mahdi Soltanolkotabi, Christos Thrampoulidis
PLay: Parametrically Conditioned Layout Generation Using Latent Diffusion
Chin-Yi Cheng, Forrest Huang, Gang Li, Yang Li
The Power of Learned Locally Linear Models for Nonlinear Policy Optimization
Daniel Pfrommer, Max Simchowitz, Tyler Westenbroek, Nikolai Matni, Stephen Tu
Relevant Walk Search for Explaining Graph Neural Networks
Ping Xiong, Thomas Schnake, Michael Gastegger, Grégoire Montavon, Klaus-Robert Müller, Shinichi Nakajima
Repository-Level Prompt Generation for Large Language Models of Code
Disha Shrivastava, Hugo Larochelle, Daniel Tarlow
Robust and Private Stochastic Linear Bandits
Vasileios Charisopoulos*, Hossein Esfandiari, Vahab Mirrokni
Simple Diffusion: End-to-End Diffusion for High Resolution Images
Emiel Hoogeboom, Jonathan Heek, Tim Salimans
Tied-Augment: Controlling Representation Similarity Improves Data Augmentation
Emirhan Kurtulus, Zichao Li, Yann Dauphin, Ekin D. Cubuk
Why Is Public Pre-Training Necessary for Private Model Training?
Arun Ganesh, Mahdi Haghifam*, Milad Nasr, Sewoong Oh, Thomas Steinke, Om Thakkar, Abhradeep Guha Thakurta, Lun Wang
A Connection Between One-Step RL and Critic Regularization in Reinforcement Learning
Benjamin Eysenbach, Matthieu Geist, Sergey Levine, Ruslan Salakhutdinov
Beyond Uniform Lipschitz Condition in Differentially Private Optimization
Rudrajit Das*, Satyen Kale, Zheng Xu, Tong Zhang, Sujay Sanghavi
Efficient Graph Field Integrators Meet Point Clouds
Krzysztof Choromanski, Arijit Sehanobish, Han Lin, Yunfan Zhao, Eli Berger, Tetiana Parshakova, Alvin Pan, David Watkins, Tianyi Zhang, Valerii Likhosherstov, Somnath Basu Roy Chowdhury, Avinava Dubey, Deepali Jain, Tamas Sarlos, Snigdha Chaturvedi, Adrian Weller
Fast as CHITA: Neural Network Pruning with Combinatorial Optimization
Riade Benbaki, Wenyu Chen, Xiang Meng, Hussein Hazimeh, Natalia Ponomareva, Zhe Zhao, Rahul Mazumder
Jump-Start Reinforcement Learning (see blog post)
Ikechukwu Uchendu*, Ted Xiao, Yao Lu, Banghua Zhu, Mengyuan Yan, Joséphine Simon, Matthew Bennice, Chuyuan Fu, Cong Ma, Jiantao Jiao, Sergey Levine, Karol Hausman
Learning in POMDPs is Sample-Efficient with Hindsight Observability
Jonathan Lee, Alekh Agarwal, Christoph Dann, Tong Zhang
Low-Variance Gradient Estimation in Unrolled Computation Graphs with ES-Single
Paul Vicol
Masked Trajectory Models for Prediction, Representation, and Control
Philipp Wu, Arjun Majumdar, Kevin Stone, Yixin Lin, Igor Mordatch, Pieter Abbeel, Aravind Rajeswaran
Overcoming Simplicity Bias in Deep Networks Using a Feature Sieve
Rishabh Tiwari, Pradeep Shenoy
Pairwise Ranking Losses of Click-Through Rates Prediction for Welfare Maximization in Ad Auctions
Boxiang Lyu, Zhe Feng, Zachary Robertson, Sanmi Koyejo
Predictive Flows for Faster Ford-Fulkerson
Sami Davies, Benjamin Moseley, Sergei Vassilvitskii, Yuyan Wang
Scaling Laws for Multilingual Neural Machine Translation
Patrick Fernandes, Behrooz Ghorbani, Xavier Garcia, Markus Freitag, Orhan Firat
Sequential Monte Carlo Learning for Time Series Structure Discovery
Feras Saad, Brian Patton, Matthew Douglas Hoffman, Rif A. Saurous, Vikash Mansinghka
Stochastic Gradient Succeeds for Bandits
Jincheng Mei, Zixin Zhong, Bo Dai, Alekh Agarwal, Csaba Szepesvari, Dale Schuurmans
Subset-Based Instance Optimality in Private Estimation
Travis Dick, Alex Kulesza, Ziteng Sun, Ananda Theertha Suresh
The Unreasonable Effectiveness of Few-Shot Learning for Machine Translation
Xavier Garcia, Yamini Bansal, Colin Cherry, George Foster, Maxim Krikun, Melvin Johnson, Orhan Firat
Tutorials
Self-Supervised Learning in Vision: from Research Advances to Best Practices
Xinlei Chen, Ishan Misra, Randall Balestriero, Mathilde Caron, Christoph Feichtenhofer, Mark Ibrahim
How to DP-fy ML: A Practical Tutorial to Machine Learning with Differential Privacy (see blog post)
Sergei Vassilvitskii, Natalia Ponomareva, Zheng Xu
Recent Advances in the Generalization Theory of Neural Networks
Tengyu Ma, Alex Damian
EXPO Day workshops
Graph Neural Networks in Tensorflow: A Practical Guide
Workshop Organizers include: Bryan Perozzi, Anton Tsitsulin, Brandon Mayer, Jonathan Halcrow
Google sponsored affinity workshops
LatinX in AI (LAXAI)
Platinum Sponsor
Keynote Speaker: Monica Ribero
Panelist: Yao Qin
Women in Machine Learning (WiML)
Platinum Sponsor
Panelist: Yao Qin
Workshops
Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities
Organizers: Peter Kairouz, Zheng Xu
Speaker: Brendan McMahan
Interpretable Machine Learning in Healthcare (IMLH)
Organizer: Ramin Zabih
Knowledge and Logical Reasoning in the Era of Data-Driven Learning
Organizer: Beliz Günel
The Many Facets of Preference-Based Learning (MFPL)
Organizers: Robert Busa-Fekete, Mohammad Ghavamzadeh
The Synergy of Scientific and Machine Learning Modelling (SynS & ML)
Speaker: Sercan Arik
Theory of Mind in Communicating Agents
Organizer: Pei Zhou
Artificial Intelligence & Human Computer Interaction
Organizers: Yang Li, Forrest Huang
Data-Centric Machine Learning Research (DMLR)
Organizers: Alicia Parrish, Najoung Kim
Speaker: Peter Mattson
Neural Compression: from Information Theory to Applications
Speaker: Johannes Ballé
Panelist: George Toderici
Organizer: Ahmad Beirami
Spurious Correlations, Invariance and Stability (SCIS)
Organizer: Amir Feder
* Work done while at Google
A quick guide to Amazon’s papers at ICML
Across a range of topics, Amazon research blends the theoretical and the practical.
Analyze rodent infestation using Amazon SageMaker geospatial capabilities
Rodents such as rats and mice are associated with a number of health risks and are known to spread more than 35 diseases. Identifying regions of high rodent activity can help local authorities and pest control organizations plan for interventions effectively and exterminate the rodents.
In this post, we show how to monitor and visualize a rodent population using Amazon SageMaker geospatial capabilities. We then visualize rodent infestation effects on vegetation and bodies of water. Finally, we correlate and visualize the number of monkeypox cases reported with rodent sightings in a region. Amazon SageMaker makes it easier for data scientists and machine learning (ML) engineers to build, train, and deploy models using geospatial data. It provides access to geospatial data sources, purpose-built processing operations, pre-trained ML models, and built-in visualization tools, so you can work faster and at scale.
Notebook
First, we use an Amazon SageMaker Studio notebook with a geospatial image by following the steps outlined in Getting Started with Amazon SageMaker geospatial capabilities.
Data access
The geospatial image comes preinstalled with SageMaker geospatial capabilities that make it easier to enrich data for geospatial analysis and ML. For this post, we use satellite images from Sentinel-2 and the rodent activity and monkeypox datasets from NYC Open Data.
First, we use the rodent activity dataset and extract the latitude and longitude of rodent sightings and inspections. Then we enrich this location information with human-readable street addresses. We create a vector enrichment job (VEJ) in the SageMaker Studio notebook to run a reverse geocoding operation, which converts geographic coordinates (latitude, longitude) to human-readable addresses, powered by Amazon Location Service. We create the VEJ as follows:
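The original post's code is not reproduced in this digest; the sketch below shows how such a reverse-geocoding VEJ can be started with the boto3 sagemaker-geospatial client. The execution role ARN, S3 path, and latitude/longitude column names are placeholders, and the exact request fields may differ slightly from the version of the API you use.

```python
import boto3

# SageMaker geospatial capabilities are exposed through the sagemaker-geospatial client.
geospatial_client = boto3.client("sagemaker-geospatial", region_name="us-west-2")

# Placeholder execution role and input location (illustrative values only).
execution_role = "arn:aws:iam::111122223333:role/SageMakerGeospatialRole"
input_s3_uri = "s3://<your-bucket>/rodent-activity/rodent_sightings.csv"

# Start a vector enrichment job (VEJ) that reverse-geocodes each row's
# latitude/longitude into a human-readable address via Amazon Location Service.
response = geospatial_client.start_vector_enrichment_job(
    Name="rodent-sightings-reverse-geocoding",
    ExecutionRoleArn=execution_role,
    InputConfig={
        "DocumentType": "CSV",
        "DataSourceConfig": {"S3Data": {"S3Uri": input_s3_uri}},
    },
    JobConfig={
        "ReverseGeocodingConfig": {
            "XAttributeName": "longitude",
            "YAttributeName": "latitude",
        }
    },
)
vej_arn = response["Arn"]
print("Started VEJ:", vej_arn)
```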
Visualize rodent activity in a region
Now we can use SageMaker geospatial capabilities to visualize rodent sightings. After the VEJ is complete, we export the output of the job to an Amazon S3 bucket.
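As a rough sketch continuing the example above (status strings, bucket name, and request fields are illustrative assumptions), the job output can be exported to Amazon S3 with the same client once the VEJ completes:

```python
import time

# Poll the VEJ until it finishes, then export its output to S3.
while True:
    job = geospatial_client.get_vector_enrichment_job(Arn=vej_arn)
    if job["Status"] in ("COMPLETED", "FAILED"):
        break
    time.sleep(30)

export = geospatial_client.export_vector_enrichment_job(
    Arn=vej_arn,
    ExecutionRoleArn=execution_role,
    OutputConfig={"S3Data": {"S3Uri": "s3://<your-bucket>/vej-output/"}},
)
print("Export status:", export["ExportStatus"])
```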
When the export is complete, you will see the output CSV file in your Amazon Simple Storage Service (Amazon S3) bucket. It consists of your input data (longitude and latitude coordinates) along with additional columns appended at the end: address number, country, label, municipality, neighborhood, postal code, and region of that location.
From the output file generated by the VEJ, we can use SageMaker geospatial capabilities to overlay the output on a base map and provide layered visualization to make collaboration easier. SageMaker geospatial capabilities provide built-in visualization tooling powered by Foursquare Studio, which natively works from within a SageMaker notebook via the SageMaker geospatial Map SDK. Below, we can visualize the rodent sightings and also get the human-readable addresses for each of the data points. The address information of each of the rodent sightings data points can be useful for rodent inspection and treatment purposes.
Analyze the effects of rodent infestation on vegetation and bodies of water
To analyze the effects of rodent infestation on vegetation and bodies of water, we need to classify each location as vegetation, water, and bare ground. Let’s look at how we can use these geospatial capabilities to perform this analysis.
The new geospatial capabilities in SageMaker offer easier access to geospatial data such as Sentinel-2 and Landsat 8. Built-in geospatial dataset access saves weeks of effort otherwise lost to collecting and processing data from various data providers and vendors. Also, these geospatial capabilities offer a pre-trained Land Use Land Cover (LULC) segmentation model to identify the physical material, such as vegetation, water, and bare ground, at the earth's surface.
We use this LULC ML model to analyze the effects of rodent population on vegetation and bodies of water.
In the following code snippet, we first define the area of interest coordinates (aoi_coords) of New York City. Then we create an Earth Observation Job (EOJ) and select the LULC operation. SageMaker downloads and preprocesses the satellite image data for the EOJ, then automatically runs model inference. The runtime of the EOJ varies from several minutes to hours depending on the number of images processed. You can monitor the status of EOJs using the get_earth_observation_job function, and visualize the input and output of the EOJ in the map.
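The snippet itself is not included in this digest; the sketch below illustrates the idea with the boto3 sagemaker-geospatial client. The aoi_coords polygon, the raster data collection ARN, the time range, and the request fields are illustrative assumptions and may not match the original notebook exactly.

```python
from datetime import datetime

# Illustrative polygon roughly covering New York City (lon, lat pairs, closed ring).
aoi_coords = [
    [-74.26, 40.49], [-73.70, 40.49], [-73.70, 40.92], [-74.26, 40.92], [-74.26, 40.49],
]

# Start an Earth Observation Job (EOJ) that runs the pre-trained
# Land Use Land Cover (LULC) segmentation model over Sentinel-2 imagery.
eoj = geospatial_client.start_earth_observation_job(
    Name="nyc-lulc-segmentation",
    ExecutionRoleArn=execution_role,
    InputConfig={
        "RasterDataCollectionQuery": {
            # Placeholder ARN for the Sentinel-2 raster data collection.
            "RasterDataCollectionArn": "arn:aws:sagemaker-geospatial:us-west-2:aws:raster-data-collection/public/sentinel-2-l2a",
            "AreaOfInterest": {
                "AreaOfInterestGeometry": {
                    "PolygonGeometry": {"Coordinates": [aoi_coords]}
                }
            },
            "TimeRangeFilter": {
                "StartTime": datetime(2023, 1, 1),
                "EndTime": datetime(2023, 3, 31),
            },
        }
    },
    JobConfig={"LandCoverSegmentationConfig": {}},
)

# Monitor the job; inference can take minutes to hours depending on the image count.
status = geospatial_client.get_earth_observation_job(Arn=eoj["Arn"])["Status"]
print("EOJ status:", status)
```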
To visualize the rodent population with respect to vegetation, we overlay the rodent population and sighting data on the land cover segmentation model predictions. This visualization can help us locate the population of rodents and analyze it on vegetation and bodies of water.
Visualize monkeypox cases and correlate with rodent data
To visualize the relation between the monkeypox cases and rodent sightings, we add the monkeypox dataset and the GeoJSON file for New York City borough boundaries. See the following code:
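The code from the original post is not reproduced here; a minimal sketch of the idea follows. The file names, the monkeypox CSV schema, and the Map object (assumed to have been created earlier in the notebook with the SageMaker geospatial map SDK) are placeholders, and the add_dataset call is illustrative of that SDK rather than an exact reproduction.

```python
import json
import pandas as pd

# Load the NYC monkeypox case counts and the borough boundary polygons.
# File names and column names are placeholders.
monkeypox_df = pd.read_csv("monkeypox_cases_by_borough.csv")  # assumed columns: borough, case_count
with open("nyc_borough_boundaries.geojson") as f:
    borough_geojson = json.load(f)

# Add both as layers to the embedded map created earlier in the notebook.
Map.add_dataset({"data": monkeypox_df, "label": "monkeypox_cases"}, auto_create_layers=True)
Map.add_dataset({"data": borough_geojson, "label": "borough_boundaries"}, auto_create_layers=True)
```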
Within a SageMaker Studio notebook, we can use the visualization tool powered by Foursquare to add layers and charts to the map. Here, we add the monkeypox data as a chart showing the number of monkeypox cases for each borough. To see the correlation between monkeypox cases and rodent sightings, we add the borough boundaries as a polygon layer and a heatmap layer representing rodent activity. The borough boundary layer is colored to match the monkeypox data chart. As we can see, the borough of Manhattan exhibits a high concentration of rodent sightings and records the highest number of monkeypox cases, followed by Brooklyn.
This is supported by a simple statistical analysis: computing the correlation between the concentration of rodent sightings and the number of monkeypox cases in each borough yields an r value of 0.714, which indicates a positive correlation.
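For readers who want to reproduce this kind of check, here is a minimal sketch of the per-borough correlation, continuing from the data loaded above. The VEJ output path, the column assumed to hold the borough name, and the monkeypox column names are illustrative; the original post's exact calculation may differ.

```python
# Count rodent sightings per borough from the enriched VEJ output.
# The column holding the borough name ("municipality") is an assumption here.
rodent_df = pd.read_csv("vej_output.csv")  # placeholder path to the exported VEJ CSV
sightings_per_borough = (
    rodent_df.groupby("municipality").size().rename("rodent_sightings").reset_index()
)

# Join with monkeypox case counts per borough and compute the Pearson correlation.
merged = sightings_per_borough.merge(
    monkeypox_df, left_on="municipality", right_on="borough"
)
r = merged["rodent_sightings"].corr(merged["case_count"])
print(f"Pearson r between rodent sightings and monkeypox cases: {r:.3f}")
```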
Conclusion
In this post, we demonstrated how you can use SageMaker geospatial capabilities to get detailed addresses of rodent sightings and visualize the rodent effects on vegetation and bodies of water. This can help local authorities and pest control organizations plan for interventions effectively and exterminate rodents. We also correlated the rodent sightings to monkeypox cases in the area with the built-in visualization tool. By utilizing vector enrichment and EOJs along with the built-in visualization tools, SageMaker geospatial capabilities eliminate the challenges of handling large-scale geospatial datasets, model training, and inference, and provide the ability to rapidly explore predictions and geospatial data on an interactive map using 3D accelerated graphics and built-in visualization tools.
You can get started with SageMaker geospatial capabilities in two ways:
- Through the SageMaker geospatial UI, as a part of SageMaker Studio UI
- Through SageMaker notebooks with a SageMaker geospatial image
To learn more, visit Amazon SageMaker geospatial capabilities and Getting Started with Amazon SageMaker geospatial capabilities. Also, visit our GitHub repo, which has several example notebooks on SageMaker geospatial capabilities.
About the authors
Bunny Kaushik is a Solutions Architect at AWS. He is passionate about building AI/ML solutions and helping customers innovate on the AWS platform. Outside of work, he enjoys hiking, rock climbing, and swimming.
Clarisse Vigal is a Sr. Technical Account Manager at AWS, focused on helping customers accelerate their cloud adoption journey. Outside of work, Clarisse enjoys traveling, hiking, and reading sci-fi thrillers.
Veda Raman is a Senior Specialist Solutions Architect for machine learning based in Maryland. Veda works with customers to help them architect efficient, secure, and scalable machine learning applications. Veda is interested in helping customers leverage serverless technologies for machine learning.
Microsoft at ICML 2023: Discoveries and advancements in machine learning
Machine learning’s rapid emergence and pervasive impact have revolutionized industries and societies across the globe. Its ability to extract insights, recognize patterns, and make intelligent predictions from vast amounts of data has paved the way for a new era of progress. From traffic and weather prediction to speech pattern recognition and advanced medical diagnostics, machine learning has been shattering the boundaries of possibility, inviting us to explore new frontiers of innovation.
The International Conference on Machine Learning (ICML 2023) serves as a global platform where researchers, academics, and industry professionals gather to share their pioneering work and advancements in the field of machine learning. As a supporter of machine learning research, Microsoft takes an active role in ICML, not only as a sponsor but also as a significant research contributor.
The breadth of contributions from Microsoft researchers and their collaborators at ICML reflects the diverse possibilities for applying machine learning.
Here are some of the highlights:
Oral sessions
BEATs: Audio Pre-Training with Acoustic Tokenizers
Sanyuan Chen, Yu Wu, Chengyi Wang, Shujie Liu, Daniel Tompkins, Zhuo Chen, and Furu Wei explore the growth of self-supervised learning (SSL) across language, vision, speech, and audio domains. They propose an iterative framework, BEATs, which combines acoustic tokenizers and audio SSL models and promotes semantic-rich discrete label prediction, facilitating the abstraction of high-level audio semantics. Experimental results demonstrate BEATs’ effectiveness, achieving state-of-the-art performance on various audio classification benchmarks, including AudioSet-2M and ESC-50.
Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL
Zakaria Mhammedi, Dylan Foster, and Alexander Rakhlin introduce MusIK, a computationally efficient algorithm for sample-efficient reinforcement learning with complex observations. MusIK overcomes limitations of existing methods by achieving rate-optimal sample complexity and minimal statistical assumptions. It combines systematic exploration with multi-step inverse kinematics to predict the learner’s future actions based on current observations.
Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies
Gati Aher, Rosa Arriaga, and Adam Tauman Kalai present the Turing Experiment (TE), a novel approach for evaluating how well language models can simulate different aspects of human behavior. Unlike the traditional Turing Test, a TE requires representative samples of participants from human subject research. The methodology enables the replication of well-established findings in economic, psycholinguistic, and social psychology experiments, such as the Ultimatum Game, Garden Path Sentences, Milgram Shock Experiment, and Wisdom of Crowds. Results demonstrate successful replication in the first three TEs, while uncovering a “hyper-accuracy distortion” in some language models during the last TE.
Other paper highlights
Bayesian Estimation of Differential Privacy
Differentially private stochastic gradient descent (SGD) algorithms provide formal privacy guarantees for training ML models, offering better protection against practical attacks. Researchers estimate protection levels using ε confidence intervals from membership inference attacks, but obtaining actionable intervals requires training an impractically large number of models. Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Ahmed Salem, Victor Ruehle, Andrew Paverd, Mohammad Naseri, Boris Köpf, and Daniel Jones propose a novel, more efficient Bayesian approach that brings privacy estimates within reach of practitioners. It reduces sample size by computing a posterior for ε from the joint posterior of the false positive and false negative rates of membership inference attacks. The authors also implement an end-to-end system for privacy estimation that integrates this approach with state-of-the-art membership inference attacks, and they evaluate it on text and vision classification tasks.
Magneto: A Foundation Transformer
Model architectures across language, vision, speech, and multimodal applications are converging. Despite being called “transformers,” these areas use different implementations for better performance. Hongyu Wang, Shuming Ma, Shaohan Huang, Li Dong, Wenhui Wang, Zhiliang Peng, Yu Wu, Payal Bajaj, Saksham Singhal, Alon Benhaim, Barun Patra, Zhun Liu, Vishrav Chaudhary, Xia Song, and Furu Wei call for developing a foundation transformer for true general-purpose modeling to serve as a go-to architecture for various tasks and modalities with guaranteed training stability. This work introduces Magneto, a transformer variant, to meet that goal. The authors propose Sub-LayerNorm for good expressivity and an initialization strategy theoretically derived from DeepNet for stable scaling up. Extensive experiments demonstrate its superior performance and better stability than de facto transformer variants designed for various applications, including language modeling, machine translation, vision pretraining, speech recognition, and multimodal pretraining.
NeuralStagger: Accelerating Physics-Constrained Neural PDE Solver with Spatial-Temporal Decomposition
Neural networks accelerate partial differential equation (PDE) solutions but need physics constraints for generalization and to reduce reliance on data. Ensuring accuracy and stability requires resolving the smallest-scale physics, increasing computational costs due to large inputs, outputs, and networks. Xinquan Huang, Wenlei Shi, Qi Meng, Yue Wang, Xiaotian Gao, Jia Zhang, and Tie-Yan Liu propose an acceleration methodology, NeuralStagger, which spatially and temporally decomposes the original learning tasks into several coarser-resolution subtasks. They define a coarse-resolution neural solver for each subtask, requiring fewer computational resources, and jointly train them with a physics-constrained loss. The solution is achieved quickly thanks to perfect parallelism, while trained solvers provide the flexibility to simulate at various resolutions.
Streaming Active Learning with Deep Neural Networks
Active learning is perhaps most naturally posed as an online learning problem. However, prior active learning approaches with deep neural networks assume offline access to the entire dataset ahead of time. Akanksha Saran, Safoora Yousefi, Akshay Krishnamurthy, John Langford, and Jordan Ash propose VeSSAL, a new algorithm for batch active learning with deep neural networks in streaming settings, which samples groups of points to query for labels at the moment they are encountered. The approach trades off between the uncertainty and diversity of queried samples to match a desired query rate without requiring any hand-tuned hyperparameters. This paper expands the applicability of deep neural networks to realistic active learning scenarios, such as applications relevant to HCI and large fractured datasets.
For the complete list of accepted publications by Microsoft researchers, please see the publications list on Microsoft at ICML 2023.
The post Microsoft at ICML 2023: Discoveries and advancements in machine learning appeared first on Microsoft Research.
Moving AI governance forward
OpenAI and other leading labs reinforce AI safety, security and trustworthiness through voluntary commitments.
OpenAI Blog
Learning Iconic Scenes with Differential Privacy
Apple Machine Learning Research
Using AI to fight climate change
AI is a powerful technology that will transform our future, so how can we best apply it to help combat climate change and find sustainable solutions? The effects of climate change on Earth’s ecosystems are incredibly complex, and as part of our effort to use AI for solving some of the world’s most challenging problems, here are some of the ways we’re working to advance our understanding, optimise existing systems, and accelerate breakthrough science of climate and its effects.