The International Conference on Machine Learning (ICML) 2020 is being hosted virtually from July 13th to July 18th. We’re excited to share all the work from SAIL that’s being presented, and you’ll find links to papers, videos, and blog posts below. Feel free to reach out to the contact authors directly to learn more about the work happening at Stanford!
List of Accepted Papers
Active World Model Learning in Agent-rich Environments with Progress Curiosity
Authors: Kuno Kim, Megumi Sano, Julian De Freitas, Nick Haber, Daniel Yamins
Contact: khkim@cs.stanford.edu
Links: Video
Keywords: curiosity, active learning, world models, animacy, attention
Graph Structure of Neural Networks
Authors: Jiaxuan You, Jure Leskovec, Kaiming He, Saining Xie
Contact: jiaxuan@stanford.edu
Keywords: neural network design, network science, deep learning
A Distributional Framework for Data Valuation
Authors: Amirata Ghorbani, Michael P. Kim, James Zou
Contact: jamesz@stanford.edu
Links: Paper
Keywords: shapley value, data valuation, machine learning, data markets
A General Recurrent State Space Framework for Modeling Neural Dynamics During Decision-Making
Authors: David Zoltowski, Jonathan Pillow, Scott Linderman
Contact: scott.linderman@stanford.edu
Links: Paper
Keywords: computational neuroscience, dynamical systems, variational inference
An Imitation Learning Approach for Cache Replacement
Authors: Evan Zheran Liu, Milad Hashemi, Kevin Swersky, Parthasarathy Ranganathan, Junwhan Ahn
Contact: evanliu@cs.stanford.edu
Links: Paper
Keywords: imitation learning, cache replacement, benchmark
An Investigation of Why Overparameterization Exacerbates Spurious Correlations
Authors: Shiori Sagawa*, Aditi Raghunathan*, Pang Wei Koh*, Percy Liang
Contact: ssagawa@cs.stanford.edu
Links: Paper
Keywords: robustness, spurious correlations, overparameterization
Better Depth-Width Trade-offs for Neural Networks through the Lens of Dynamical Systems
Authors: Vaggos Chatziafratis, Sai Ganesh Nagarajan, Ioannis Panageas
Contact: vaggos@cs.stanford.edu
Links: Paper
Keywords: expressivity, depth, width, dynamical systems
Bridging the Gap Between f-GANs and Wasserstein GANs
Authors: Jiaming Song, Stefano Ermon
Contact: jiaming.tsong@gmail.com
Links: Paper
Keywords: gans, generative models, f-divergence, wasserstein distance
Concept Bottleneck Models
Authors: Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang
Contact: pangwei@cs.stanford.edu
Links: Paper
Keywords: concepts, intervention, interpretability
Domain Adaptive Imitation Learning
Authors: Kuno Kim, Yihong Gu, Jiaming Song, Shengjia Zhao, Stefano Ermon
Contact: khkim@cs.stanford.edu
Links: Paper
Keywords: imitation learning, domain adaptation, reinforcement learning, generative adversarial networks, cycle consistency
Encoding Musical Style with Transformer Autoencoders
Authors: Kristy Choi, Curtis Hawthorne, Ian Simon, Monica Dinculescu, Jesse Engel
Contact: kechoi@cs.stanford.edu
Links: Paper | Blog Post | Video
Keywords: sequential, network, and time-series modeling; applications – music
Fair Generative Modeling via Weak Supervision
Authors: Kristy Choi, Aditya Grover, Trisha Singh, Rui Shu, Stefano Ermon
Contact: kechoi@cs.stanford.edu
Links: Paper | Video
Keywords: deep learning – generative models and autoencoders; fairness, equity, justice, and safety
Fast and Three-rious: Speeding Up Weak Supervision with Triplet Methods
Authors: Daniel Y. Fu, Mayee F. Chen, Frederic Sala, Sarah M. Hooper, Kayvon Fatahalian, Christopher Ré
Contact: danfu@cs.stanford.edu
Links: Paper | Blog Post | Video
Keywords: weak supervision, latent variable models
Flexible and Efficient Long-Range Planning Through Curious Exploration
Authors: Aidan Curtis, Minjian Xin, Dilip Arumugam, Kevin Feigelis, Daniel Yamins
Contact: yamins@stanford.edu
Links: Paper | Blog Post | Video
Keywords: planning, deep learning, sparse reinforcement learning, curiosity
FormulaZero: Distributionally Robust Online Adaptation via Offline Population Synthesis
Authors: Aman Sinha, Matthew O’Kelly, Hongrui Zheng, Rahul Mangharam, John Duchi, Russ Tedrake
Contact: amans@stanford.edu, mokelly@seas.upenn.edu
Links: Paper | Video
Keywords: distributional robustness, online learning, autonomous driving, reinforcement learning, simulation, mcmc
Goal-Aware Prediction: Learning to Model What Matters
Authors: Suraj Nair, Silvio Savarese, Chelsea Finn
Contact: surajn@stanford.edu
Links: Paper
Keywords: reinforcement learning, visual planning, robotics
Graph-based, Self-Supervised Program Repair from Diagnostic Feedback
Authors: Michihiro Yasunaga, Percy Liang
Contact: myasu@cs.stanford.edu
Links: Paper | Blog Post | Video
Keywords: program repair, program synthesis, self-supervision, pre-training, graph
Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions
Authors: Omer Gottesman, Joseph Futoma, Yao Liu, Sonali Parbhoo, Leo Anthony Celi, Emma Brunskill, Finale Doshi-Velez
Contact: gottesman@fas.harvard.edu
Links: Paper
Keywords: reinforcement learning, off-policy evaluation, interpretability
Learning Near Optimal Policies with Low Inherent Bellman Error
Authors: Andrea Zanette, Alessandro Lazaric, Mykel Kochenderfer, Emma Brunskill
Contact: zanette@stanford.edu
Links: Paper
Keywords: reinforcement learning, exploration, function approximation
Maximum Likelihood With Bias-Corrected Calibration is Hard-To-Beat at Label Shift Domain Adaptation
Authors: Amr Alexandari*, Anshul Kundaje†, Avanti Shrikumar*† (*co-first, †co-corresponding)
Contact: avanti@cs.stanford.edu, amr.alexandari@gmail.com, akundaje@stanford.edu
Links: Paper | Blog Post | Video
Keywords: domain adaptation, label shift, calibration, maximum likelihood
NGBoost: Natural Gradient Boosting for Probabilistic Prediction
Authors: Tony Duan*, Anand Avati*, Daisy Yi Ding, Sanjay Basu, Andrew Ng, Alejandro Schuler
Contact: avati@cs.stanford.edu
Links: Paper
Keywords: gradient boosting, uncertainty estimation, natural gradient
On the Expressivity of Neural Networks for Deep Reinforcement Learning
Authors: Kefan Dong, Yuping Luo, Tianhe Yu, Chelsea Finn, Tengyu Ma
Contact: kefandong@gmail.com
Links: Paper
Keywords: reinforcement learning
On the Generalization Effects of Linear Transformations in Data Augmentation
Authors: Sen Wu, Hongyang Zhang, Gregory Valiant, Christopher Ré
Contact: senwu@cs.stanford.edu
Links: Paper | Blog Post | Video
Keywords: data augmentation, generalization
Predictive Coding for Locally-Linear Control
Authors: Rui Shu*, Tung Nguyen*, Yinlam Chow, Tuan Pham, Khoat Than, Mohammad Ghavamzadeh, Stefano Ermon, Hung Bui
Contact: ruishu@stanford.edu
Links: Paper | Video
Keywords: representation learning, information theory, generative models, planning, control
Robustness to Spurious Correlations via Human Annotations
Authors: Megha Srivastava, Tatsunori Hashimoto, Percy Liang
Contact: megha@cs.stanford.edu
Links: Paper
Keywords: robustness, distribution shift, crowdsourcing, human-in-the-loop
Sample Amplification: Increasing Dataset Size even when Learning is Impossible
Authors: Brian Axelrod, Shivam Garg, Vatsal Sharan, Gregory Valiant
Contact: shivamgarg@stanford.edu
Links: Paper | Video
Keywords: learning theory, sample amplification, generative models
Scalable Identification of Partially Observed Systems with Certainty-Equivalent EM
Authors: Kunal Menda, Jean de Becdelièvre, Jayesh K. Gupta, Ilan Kroo, Mykel J. Kochenderfer, Zachary Manchester
Contact: kmenda@stanford.edu
Links: Paper | Video
Keywords: system identification; time series and sequence models
The Implicit and Explicit Regularization Effects of Dropout
Authors: Colin Wei, Sham Kakade, Tengyu Ma
Contact: colinwei@stanford.edu
Links: Paper
Keywords: dropout, deep learning theory, implicit regularization
Training Deep Energy-Based Models with f-Divergence Minimization
Authors: Lantao Yu, Yang Song, Jiaming Song, Stefano Ermon
Contact: lantaoyu@cs.stanford.edu
Links: Paper
Keywords: energy-based models; f-divergences; deep generative models
Two Routes to Scalable Credit Assignment without Weight Symmetry
Authors: Daniel Kunin*, Aran Nayebi*, Javier Sagastuy-Brena*, Surya Ganguli, Jonathan M. Bloom, Daniel L. K. Yamins
Contact: jvrsgsty@stanford.edu
Links: Paper | Video
Keywords: learning rules, computational neuroscience, machine learning
Understanding Self-Training for Gradual Domain Adaptation
Authors: Ananya Kumar, Tengyu Ma, Percy Liang
Contact: ananya@cs.stanford.edu
Links: Paper | Video
Keywords: domain adaptation, self-training, semi-supervised learning
Understanding and Mitigating the Tradeoff between Robustness and Accuracy
Authors: Aditi Raghunathan*, Sang Michael Xie*, Fanny Yang, John C. Duchi, Percy Liang
Contact: aditir@stanford.edu, xie@cs.stanford.edu
Links: Paper | Video
Keywords: adversarial examples, adversarial training, robustness, accuracy, tradeoff, robust self-training
Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling
Authors: Yao Liu, Pierre-Luc Bacon, Emma Brunskill
Contact: yaoliu@stanford.edu
Links: Paper
Keywords: reinforcement learning, off-policy evaluation, importance sampling
Visual Grounding of Learned Physical Models
Authors: Yunzhu Li, Toru Lin*, Kexin Yi*, Daniel M. Bear, Daniel L. K. Yamins, Jiajun Wu, Joshua B. Tenenbaum, Antonio Torralba
Contact: liyunzhu@mit.edu
Links: Paper | Video
Keywords: intuitive physics, visual grounding, physical reasoning
Learning to Simulate Complex Physics with Graph Networks
Authors: Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, Peter W. Battaglia
Contact: rexying@stanford.edu
Links: Paper
Keywords: simulation, graph neural networks
Coresets for Data-Efficient Training of Machine Learning Models
Authors: Baharan Mirzasoleiman, Jeff Bilmes, Jure Leskovec
Contact: baharanm@cs.stanford.edu
Links: Paper | Video
Keywords: coresets, data-efficient training, submodular optimization, incremental gradient methods
Which Tasks Should Be Learned Together in Multi-Task Learning?
Authors: Trevor Standley, Amir Zamir, Dawn Chen, Leonidas Guibas, Jitendra Malik, Silvio Savarese
Contact: tstand@cs.stanford.edu
Links: Paper | Video
Keywords: machine learning, multi-task learning, computer vision
Accelerated Message Passing for Entropy-Regularized MAP Inference
Authors: Jonathan N. Lee, Aldo Pacchiano, Peter Bartlett, Michael I. Jordan
Contact: jnl@stanford.edu
Links: Paper
Keywords: graphical models, map inference, message passing, optimization
We look forward to seeing you at ICML 2020!