Does AutoML work for diverse tasks?

Over the past decade, machine learning (ML) has grown rapidly in both popularity and complexity. Driven by advances in deep neural networks, ML is now being applied far beyond its traditional domains like computer vision and text processing, with applications in areas as diverse as solving partial differential equations (PDEs), tracking credit card fraud, and predicting medical conditions from gene sequences. However, progress in such areas has often required expert-driven development of complex neural network architectures, expensive hyperparameter tuning, or both. Given that such resource-intensive iteration is costly and inaccessible to most practitioners, AutoML has emerged with the overarching goal of enabling any team of ML developers to deploy ML on arbitrary new tasks. Here we ask about the current status of AutoML, namely: can available AutoML tools quickly and painlessly attain near-expert performance on diverse learning tasks?

This blog post is dedicated to two recent but related efforts that measure the field’s current effectiveness at achieving this goal: NAS-Bench-360 and the AutoML Decathlon. The first is a benchmark suite focusing on the burgeoning field of neural architecture search (NAS), which seeks to automate the development of neural network models. With evaluations on ten diverse tasks—including a precomputed tabular benchmark on three of them—NAS-Bench-360 is the first NAS testbed that goes beyond traditional AI domains such as vision, text, and audio signals. Specifically, the ten tasks vary in their domain (including image, finance time series, audio, and the natural sciences), problem type (including regression, single-label classification, and multi-label classification), and scale (ranging from several thousand to hundreds of thousands of observations).

The second is a NeurIPS 2022 competition (which we are soft-launching today!) that builds on our NAS-Bench-360 work yet has a broader vision of understanding what is truly the best approach for a practitioner to take when faced with a modern ML problem.  During the public development phase of the competition we will release a set of diverse tasks that will be representative of (but distinct from) the final set of test tasks on which evaluation will be performed. Unlike most past competitions in the AutoML community, competitors in the AutoML Decathlon are free (and in fact encouraged) to consider a wide range of approaches from traditional hyperparameter optimization and ensembling methods to modern techniques such as NAS and large-scale transfer learning.

You can learn more about getting involved with either of these efforts at the bottom of this post.

NAS-Bench-360: A NAS Benchmark for diverse tasks

NAS-Bench-360 is a benchmark suite consisting of ten ML tasks that we developed jointly with Renbo Tu, Nick Roberts, Junhong Shen, and Fred Sala. These tasks represent a diverse set of signals, including various kinds of imaging sources, simulation data, genomic data, and more. At the same time, we constrain all tasks to be amenable to modern NAS search spaces, i.e. we do not include tabular or graph-based data, thus allowing for the application of most NAS methods. Our evaluation on NAS-Bench-360 is thus a robustness test that checks whether the massive amount of largely computer vision-driven progress in the field of NAS is actually indicative of wider success of AutoML across a variety of applications, data types, and tasks. More importantly, the benchmark will serve as a useful tool to develop and evaluate new, better methods for NAS.

So can AutoML tools—specifically NAS methods—quickly and painlessly attain near-expert performance on NAS-Bench-360? In positive news, searching over a large search space such as DARTS using a state-of-the-art algorithm such as GAEA does yield models that outperform available expert architectures on half of the tasks, in addition to consistently beating the perennial Kaggle favorite XGBoost and a recent attempt at a general-purpose architecture, Perceiver IO. On the other hand, it fails catastrophically on several tasks, doing little better than a simple baseline, namely a tuned Wide ResNet (Figure 1, left panel). Indeed, despite being developed on CIFAR-10, it does surprisingly poorly on 2D classification tasks from the medical and audio domains. Furthermore, in a resource-constrained setting where AutoML methods are not given much more time than running a single architecture, the leading NAS method DenseNAS does worse than an untuned Wide ResNet (Figure 1, right panel).

Figure 2:  Whereas high-performance architectures on vision datasets often perform well on other vision datasets (left), we use NAS-Bench-360 to show that this does not translate to diverse tasks (right).

Our evaluation of modern NAS methods on NAS-Bench-360 demonstrates the need for such a benchmark and reveals a lack of robustness in the field. NAS-Bench-360 is also useful for understanding past and future search spaces and algorithms, specifically whether current beliefs about NAS extend to diverse tasks. For example, Figure 2 shows that high-performing architectures transfer well between vision tasks—a quality used extensively in NAS research—but not between diverse tasks. Other examples of scientific uses of NAS-Bench-360—such as one investigating a recent paper on operation redundancy—are provided in our paper and in a recent ICLR 2022 blog post on zero-cost proxies. We also expect NAS-Bench-360 to be used for the development of new NAS methods; to further this, we provide precomputed models for all architectures in the NAS-Bench-201 search space on two of the datasets. Together with existing CIFAR-100 precompute results, this means three NAS-Bench-360 datasets have precomputed tabular benchmarks to accelerate search algorithm development.

The AutoML Decathlon: A competition focused on diverse tasks and methods

Our goal in releasing NAS-Bench-360 is to spur the development of NAS methods that work well on diverse tasks. However, given the mixed performance of NAS on this benchmark, there remains a question of whether automatic architecture design should even be the focus of AutoML research more broadly. Building on our efforts from NAS-Bench-360, a group of researchers at CMU, Hewlett Packard Enterprise (HPE), Wisconsin-Madison, and Morgan Stanley are organizing the AutoML Decathlon competition at NeurIPS 2022 precisely to ask the following broader question: what automated technique(s) are best for diverse tasks?

This competition is designed to address two gaps between research and practice:

  1. Lack of task diversity. The field of NAS is no exception here, as the vast majority of recent AutoML benchmarking and competition efforts have focused on computer vision or other well-studied tasks in speech and language processing. Evaluating AutoML methods on such well-studied tasks does not give a good indication of their utility on more far-afield applications.
  2. Siloed methodological development. Many developments in AutoML narrowly focus on particular techniques rather than on the downstream benefits to the end user. A practitioner with a specific ML task ultimately cares about the quality of the resulting model (in terms of accuracy as well as other relevant metrics), not about the underlying technical details of the procedure that yields this model, e.g., whether the model is the result of a weight-sharing NAS method, a fine-tuned large model, a more classical non-deep-learning AutoML technique, or some other automated procedure.

By designing our competition in a practitioner-centric fashion and accounting for the two aforementioned gaps, our competition aims to spur innovation in AutoML with results that are directly transferable to ML practitioners. We envision that the results of our competition will provide novel empirical insights into several open practical and scientific questions, including:

  • Given the growing methodological diversity of (Auto)ML approaches, what methods should I consider as a practitioner in 2022?  
  • How do leading NAS methods compare to the increasingly popular pre-training/fine-tuning paradigm?
  • How do either of these more modern approaches compare to classical AutoML approaches or to standard baselines such as XGBoost or a tuned ResNet?
  • Should I consider using any AutoML procedure given that I’m working on a specific scientific, technological, or industrial problem that seemingly differs drastically from well-studied tasks in computer vision and NLP?  
  • Given a reasonable computational budget, can any AutoML approach (whether classical or more modern) consistently outperform bespoke models that were hand-crafted by domain experts, ML experts, or both?
Figure 3: Summary of the AutoML Decathlon competition timeline. To ensure efficiency, the evaluation will be conducted under a fixed computational budget. To ensure robustness, a performance profile methodology will be used for determining the winners.

We note that while AutoML is not a new research area, we view our competition as being particularly timely given (1) the rapid growth of ML task diversity, (2) progress in ML model development, and (3) acceleration in the scale of both datasets and available compute resources. Indeed, recent progress along these three dimensions has led us to make remarkably different design choices from those of past competitions like the AutoDL competition, which was launched just three years ago. For instance, we work with bigger datasets, allow larger computational budgets, consider an expanding set of applications, and perform more robust evaluations based on performance profiles. Relatedly, while over the past three years we’ve witnessed significant progress in NAS and the emergence of the pre-training/fine-tuning paradigm in various settings, neither of these types of approaches featured prominently in the AutoDL competition (or other past competitions). In contrast, we hypothesize that these approaches will be more prominently featured in the AutoML Decathlon.

The AutoML Decathlon is built around a set of 20 curated datasets that represent a broad spectrum of practical applications in scientific, technological, and industrial domains. As explained in Figure 3, ten of the tasks will be used for development, and an additional ten tasks will be used for final evaluation and revealed only after the competition. We will provide computational resources to participants as needed, with funding provided by Morgan Stanley. The results of our performance-profile based evaluation will determine monetary prizes, including a $15K first prize, with sponsorship provided by HPE.
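To make the evaluation criterion concrete, here is a rough sketch of how a performance profile aggregates results across tasks. It illustrates the general Dolan-Moré methodology; the array contents are hypothetical, and this is not the competition's scoring code.

```python
import numpy as np

def performance_profiles(losses, taus):
    """losses: (n_methods, n_tasks) array of final task losses (lower is better).
    Returns rho[m, j]: the fraction of tasks on which method m is within a
    factor taus[j] of the best method for that task."""
    ratios = losses / losses.min(axis=0, keepdims=True)
    return np.stack([(ratios <= tau).mean(axis=1) for tau in taus], axis=1)

# Hypothetical losses for two methods on three tasks:
losses = np.array([[1.0, 2.0, 0.5],
                   [1.2, 1.0, 0.6]])
print(performance_profiles(losses, np.linspace(1.0, 3.0, 5)))
```

A method whose profile dominates another's is more robust across the task suite, rather than merely best on a single task.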

Getting Involved: Using NAS-Bench-360 and competing in the AutoML Decathlon

Our goal with both NAS-Bench-360 and the AutoML Decathlon is to encourage community participation in evaluating what AutoML is already good at, what areas need improving, and what directions seem most promising for future work. We hope that these rigorous benchmarking activities will help the field move more rapidly towards a truly democratized ML toolkit that can be used by researchers and practitioners alike.

To learn more, check out the following links:

  • NAS-Bench-360: You can download the ten datasets on the website, and learn more about the benchmark and our various insights from our paper.
  • AutoML Decathlon: The competition officially starts next week and runs through mid-October, but we are soft-launching today to spread the word. You can learn more about the details at the competition website and the associated CodaLab website.

Also, stay tuned for a follow-up blog post in which our collaborator Junhong Shen describes our recent algorithmic NAS work targeting diverse tasks.

Read More

Deep Attentive Variational Inference

Figure 1: Overview of a local variational layer (left) and an attentive variational layer (right) proposed in this post.

Generative models are a class of machine learning models that are able to generate novel data samples such as fictional celebrity faces, digital artwork, or scenery images. Currently, the most powerful generative models are deep probabilistic models. This class of models uses deep neural networks to express statistical hypotheses about the way in which the data have been generated. Latent variable models augment the set of the observed data with latent (unobserved) information in order to better characterize the procedure that generates the data of interest.

In spite of the successful results, deep generative modeling remains one of the most complex and expensive tasks in AI. Recent models rely on increased architectural depth to improve performance. However, as we show in our paper [1], the predictive gains diminish as depth increases. Keeping a Green-AI perspective in mind when designing such models could lead to their wider adoption in describing large-scale, complex phenomena.

A quick review of Deep Variational AutoEncoders

Latent variable models augment the set of the observed variables with auxiliary latent variables. They are characterized by a posterior distribution over the latent variables, one which is generally intractable and typically approximated by closed-form alternatives. Moreover, they provide an explicit parametric characterization of the joint distribution over the expanded random variable space. The generative and the inference portions of such a model are jointly trained. The Variational AutoEncoder (VAE) belongs to this model category. Figure 2 provides an overview of a VAE.

Figure 2: A Variational AutoEncoder consists of a generative model and an inference model. The generative model, or decoder, is defined by a joint distribution of latent and observed variables. The inference model, or encoder, approximates the true posterior of the latent variables given the observations. The two parts are jointly trained.

VAEs are trained by maximizing the Evidence Lower BOund (ELBO) which is a tractable, lower bound of the marginal log-likelihood:

$$\log p(x) \ge \mathbb{E}_{q(z\mid x)}\left[\log p(x\mid z)\right] - D_{KL}\left(q(z\mid x)\,\|\,p(z)\right).$$
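As a concrete illustration of this bound, below is a minimal PyTorch sketch of the ELBO for a VAE with a diagonal-Gaussian posterior, a standard-normal prior, and a Bernoulli likelihood; `encoder` and `decoder` are placeholder networks, not an architecture from the paper.

```python
import torch
import torch.nn.functional as F

def elbo(x, encoder, decoder):
    # q(z|x) = N(mu, diag(exp(logvar))); encoder/decoder are placeholders.
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
    logits = decoder(z)                                    # Bernoulli p(x|z)
    log_px_z = -F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    # KL(q(z|x) || N(0, I)) in closed form:
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return log_px_z - kl   # the lower bound on log p(x)
```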
Figure 3: Overview of a hierarchical VAE.

The most powerful VAEs introduce large latent spaces \(z\) that are organized in blocks such that \(z = \{z_1, z_2, \dots, z_L\}\), with each block being generated by a layer in a hierarchy. Figure 3 illustrates a typical architecture of a hierarchical VAE. Most state-of-the-art VAEs correspond to a fully connected probabilistic graphical model. More formally, the prior distribution follows the factorization:

$$ p(z) = p(z_1) \prod_{l=2}^L p(z_l \mid z_{<l}). \tag{1}$$

In words, \(z_l\) depends on all previous latent factors \(z_{<l}\). Similarly, the posterior distribution is given by:

$$ q(z\mid x) = q(z_1 \mid x) \prod_{l=2}^L q(z_l \mid x, z_{<l}). \tag{2}$$

The long-range conditional dependencies are implicitly enforced via deterministic features that are mixed with the latent variables and propagated through the hierarchy. Concretely, each layer \(l\) is responsible for providing the next layer with a latent sample \(z_l\) along with context information \(c_l\):

$$ c_l \leftarrow T_l\left(z_{l-1} \oplus c_{l-1}\right). \tag{3}$$

In a convolutional VAE, \(T_l\) is a non-linear transformation implemented by ResNet blocks as shown in Figure 1. The operator \(\oplus\) combines two branches in the network. Due to its recursive definition, \(c_l\) is a function of \(z_{<l}\).
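A minimal sketch of this top-down recursion, assuming each \(T_l\) is a small convolutional block and \(\oplus\) is elementwise addition (all names and shapes here are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

L, C = 4, 8  # number of layers and channels, chosen arbitrarily
T = nn.ModuleList([nn.Sequential(nn.Conv2d(C, C, 3, padding=1), nn.ReLU())
                   for _ in range(L)])

c = torch.zeros(1, C, 16, 16)          # initial context c_0
for l in range(L):
    z = torch.randn(1, C, 16, 16)      # stand-in for the latent sample z_l
    c = T[l](z + c)                    # Eq. (3): next context from z_l (+) c_l
```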

Deep Variational AutoEncoders are “overthinking”

Recent models, such as NVAE [2], rely on increased depth to improve performance, delivering results comparable to those of purely generative, autoregressive models while permitting fast sampling that requires a single network evaluation. However, as we show in our paper and Table 1, the predictive gains diminish as depth increases. After some point, even if we double the number of layers, we realize only a slight increase in the marginal likelihood.

Depth \(L\)    bits/dim \((\downarrow)\)    \(\Delta(\cdot)\,\%\)
2              3.50                         –
4              3.26                         -6.8
8              3.06                         -6.1
16             2.96                         -3.2
30             2.91                         -1.7

Table 1: Deep VAEs suffer from diminishing returns. \(-\log p(x)\) in bits per dimension and relative decrease for varying number of variational layers \(L\).

We argue that this may be because the effect of the latent variables of earlier layers diminishes as the context feature \(c_l\) traverses the hierarchy and is updated with latent information from subsequent layers. In turn, this means that in practice the network may no longer respect the factorization of the variational distributions of Equations (1) and (2), leading to sub-optimal performance. Formally, large portions of the early blocks \(z_l\) collapse to their prior counterparts and therefore no longer contribute to inference.

This phenomenon can be attributed to the local connectivity of the layers in the hierarchy, as shown in Figure 4.a. In a deep VAE, a layer is directly connected only to its adjacent layers, limiting long-range conditional dependencies between \(z_l\) and \(z_{\ll l}\) as depth increases.

The flexibility of the prior \(p(z)\) and the posterior \(q(z \mid x)\) can be improved by designing more informative representations for the conditioning factors of the conditional distributions \(p(z_l \mid z_{<l})\) and \(q(z_l \mid x, z_{<l})\). This can be accomplished by designing a hierarchy of densely connected stochastic layers that dynamically learn to attend to the latent and observed information most critical to inference. A high-level description of this idea is illustrated in Figure 4.b.

Figure 4: (a) Locally Connected Variational Layer.
(b) Strongly Connected Variational Layer.

In the following sections, we describe the technical tool that allows our model to realize the strong couplings presented in Figure 4.b.

Problem: Handling long sequences of large 3D tensors

In deep convolutional architectures, we usually need to handle long sequences of large 3D context tensors. A typical sequence is shown in Figure 5. Effectively constructing strong couplings between the current and previous layers in a deep architecture can be formulated as follows:

Figure 5: Sequence of 3D tensors in a convolutional architecture.

Problem definition: Given a sequence \(c_{<l}=\{c_m\}_{m=1}^{l-1}\) of \(l-1\) contexts \(c_m\), with \(c_m \in \mathbb{R}^{H \times W \times C}\), we need to construct a single context \(\hat{c}_l \in \mathbb{R}^{H \times W \times C}\) that summarizes the information in \(c_{<l}\) that is most critical to the task.

In our framework, the task of interest is the construction of posterior and prior beliefs. Equivalently, the contexts \(\hat{c}^q_l\) and \(\hat{c}^p_l\) represent the conditioning factors of the posterior and prior distributions of layer \(l\).

There are two ways to view a long sequence of \(l-1\) large \(H \times W \times C\)-dimensional contexts:

  • Inter-Layer couplings: As \(H \times W\) independent pixel sequences of \(C\)-dimensional features of length \(l-1\). One such sequence is highlighted in Figure 5.
  • Intra-Layer couplings: As \(l-1\) independent pixel sequences of \(C\)-dimensional features of length \(H \times W\).

This observation leads to a factorized attention scheme that identifies important long-range inter-layer and intra-layer dependencies separately. Such a decomposition of large and long pixel sequences significantly reduces computation.

Inter-Layer couplings: Depth-wise Attention

The network relies on a depth-wise attention scheme to discover inter-layer dependencies. The task is characterized by a query feature \(s\). During this phase, the pixel sequences correspond to instances of a pixel at the previous layers in the architecture. They are processed concurrently and independently from the rest. The contexts are represented by key features \(k\) of a lower dimension. The final context is computed as a weighted sum of the contexts according to an attention distribution. The mechanism is explained in Figure 6.

Figure 6: Explanation of depth-wise attention in convolutional architectures.

The layers in the variational hierarchy are augmented with two depth-wise attention blocks for constructing the contexts of the prior and posterior distributions. Figure 1 displays the computational block of an attentive variational layer. As shown in Figure 6, each layer also needs to emit attention-relevant features: the keys \(k_l\) and queries \(s_l\), along with the contexts \(c_l\). Equation (3) is revised for the attention-driven path in the decoder such that the context, its key, and the query are jointly learned:

$$ [c_l, s_l, k_l] \leftarrow T_l\left(z_{l-1} \oplus c_{l-1}\right). \tag{4}$$

A formal description, along with normalization schemes, is provided in our paper. A minimal sketch of the mechanism is given below.
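The following PyTorch sketch shows one way to implement depth-wise attention over a stack of per-layer contexts, following the high-level description above. It is our own illustrative reading, not the paper's implementation; the module name, the key dimension `d_key`, and the 1×1-convolution key/query maps are assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseAttention(nn.Module):
    """Attend over the depth dimension: each pixel location forms its own
    sequence of length l-1 across the previous layers' contexts."""

    def __init__(self, channels, d_key=16):
        super().__init__()
        self.to_key = nn.Conv2d(channels, d_key, kernel_size=1)
        self.to_query = nn.Conv2d(channels, d_key, kernel_size=1)
        self.scale = d_key ** -0.5

    def forward(self, query_feat, contexts):
        # query_feat: (B, C, H, W) query s_l emitted by the current layer.
        # contexts: list of l-1 tensors (B, C, H, W), one per earlier layer.
        q = self.to_query(query_feat)                                # (B, d, H, W)
        ks = torch.stack([self.to_key(c) for c in contexts], dim=1)  # (B, L, d, H, W)
        cs = torch.stack(contexts, dim=1)                            # (B, L, C, H, W)
        logits = (ks * q.unsqueeze(1)).sum(dim=2) * self.scale       # (B, L, H, W)
        attn = logits.softmax(dim=1)          # attention distribution over depth
        return (attn.unsqueeze(2) * cs).sum(dim=1)  # weighted sum -> (B, C, H, W)

attn = DepthwiseAttention(channels=8)
contexts = [torch.randn(2, 8, 16, 16) for _ in range(3)]   # c_1 .. c_{l-1}
c_hat = attn(torch.randn(2, 8, 16, 16), contexts)          # summarized context
```

Because the attention is computed independently at each pixel, the cost grows with the number of layers rather than with the full \(H \times W \times (l-1)\) sequence length.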

Intra-Layer couplings: Non-local blocks

Intra-layer dependencies can be leveraged by interleaving non-local blocks [3] with the convolutions in the ResNet blocks of the architecture, also shown in Figure 1.

Experiments

We evaluate Attentive VAEs on several public benchmark datasets of both binary and natural images. In Table 2, we show the performance and training time of state-of-the-art deep VAEs on CIFAR-10, a dataset of 32×32 natural images. Attentive VAEs achieve state-of-the-art likelihoods compared to other deep VAEs. More importantly, they do so with significantly fewer layers, which means decreased training and sampling time.

Model                            Layers    Training Time (GPU hours)    \(-\log p(x)\) (bits/dim)
Attentive VAE, 400 epochs [1]    16        272                          2.82
Attentive VAE, 500 epochs [1]    16        336                          2.81
Attentive VAE, 900 epochs [1]    16        608                          2.79
NVAE [2]                         30        440                          2.91
Very Deep VAE [4]                45        288                          2.87

Table 2: Comparison of performance and computational requirements of deep state-of-the-art VAE models.

In Figures 8 and 9, we show reconstructed and novel images generated by the attentive VAE.

Figure 8: Original & Reconstructed CIFAR-10 images.
Figure 9: Uncurated fantasy CIFAR-10 images.

The reason behind this improvement is that the attention-driven, long-range connections between layers lead to better utilization of the latent space. In Figure 7, we visualize the KL divergence per layer during training. As we see in (b), the KL penalty is evenly distributed among layers. In contrast, as shown in (a), the upper layers in a local, deep VAE are significantly less active. This confirms our hypothesis that the fully-connected factorizations of Equations (1) and (2) may not be supported by local models. In contrast, an attentive VAE dynamically prioritizes statistical dependencies between latent variables most critical to inference.

Figure 7: KL visualization in (a) a local
(b) and an attentive VAE.

Finally, attention-guided VAEs close the performance gap between variational models and expensive, autoregressive models. Comprehensive quantitative and qualitative comparisons are provided in our paper.

Conclusion

The expressivity of current deep probabilistic models can be improved by selectively prioritizing statistical dependencies between latent variables that are potentially distant from each other. Attention mechanisms can be leveraged to build more expressive variational distributions in deep probabilistic models by explicitly modeling both nearby and distant interactions in the latent space. Attentive inference reduces computational footprint by alleviating the need for deep hierarchies.

Acknowledgments

A special word of thanks is due to Christos Louizos for helpful pointers to prior works on VAEs, Katerina Fragkiadaki for helpful discussions on generative models and attention mechanisms for computer vision tasks, Andrej Risteski for insightful conversations on approximate inference, and Jeremy Cohen for his remarks on a late draft of this work. Moreover, we are very grateful to Radium Cloud for granting us access to computing infrastructure that enabled us to scale up our experiments. We also thank the International Society for Bayesian Analysis (ISBA) for the travel grant and the invitation to present our work as a contributed talk at the 2022 ISBA World Meeting. This material is based upon work supported by the Defense Advanced Research Projects Agency under award number FA8750-17-2-0130, and by the National Science Foundation under grant number 2038612. Moreover, the first author acknowledges support from the Alexander Onassis Foundation and from A. G. Leventis Foundation. The second author is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE1745016 and DGE2140739.

DISCLAIMER: All opinions expressed in this post are those of the author and do not represent the views of CMU.

References

[1] Apostolopoulou I, Char I, Rosenfeld E, Dubrawski A. Deep Attentive Variational Inference. In: International Conference on Learning Representations (ICLR), 2022.

[2] Vahdat A, Kautz J. NVAE: A Deep Hierarchical Variational Autoencoder. Advances in Neural Information Processing Systems (NeurIPS). 2020;33:19667-79.

[3] Wang X, Girshick R, Gupta A, He K. Non-local Neural Networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. pp. 7794-7803.

[4] Child R. Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images. arXiv preprint arXiv:2011.10650, 2020.


Read More

An Experimental Design Perspective on Model-Based Reinforcement Learning

$$ \newcommand{\statespace}{\mathcal{S}} \newcommand{\actionspace}{\mathcal{A}} \newcommand{\Rbb}{\mathbb{R}} \newcommand{\Ebb}{\mathbb{E}} \newcommand{\Hbb}{\mathbb{H}} \DeclareMathOperator{\EIG}{EIG}$$

Reinforcement learning (RL) has achieved astonishing successes in domains where the environment is easy to simulate. For example, in games like Go or those in the Atari library, agents can play millions of games over the course of days to explore the environment and find superhuman policies [1]. However, transferring these advances to broader real-world applications is challenging because the cost of exploration in many important domains is high. For example, while RL-based solutions for controlling plasmas in nuclear fusion power generation are promising, there is only one operating tokamak in the United States and its resources are in extremely high demand. Even the most data-efficient RL algorithms typically take thousands of samples to solve even moderately complex problems [2, 3], which is infeasible in plasma control and many other applications.

In contrast to conventional machine learning settings where the data is given to the decision maker, an RL agent can choose data to learn from. A natural idea for reducing data requirements is to choose data wisely such that a smaller amount of data is sufficient to perform well on a task. In this post, we describe a practical implementation of this idea. Specifically, we offer an answer to the following question: “If we were to collect one additional datapoint from anywhere in the state-action space to best improve our solution to the task, which one would it be?”. This question is related to a more fundamental idea in the design of intelligent agents with limited resources: such agents should be able to understand what information about the world is the most useful to help them accomplish their task. We see this work as a small step towards this bigger goal.

In our recent ICLR paper, An Experimental Design Perspective on Model-Based Reinforcement Learning, we derive an acquisition function that guides an agent in choosing data for the most successful learning. In doing this, we draw a connection between model-based reinforcement learning and Bayesian optimal experimental design (BOED) and evaluate data prospectively in the context of the task reward function and the current uncertainty about the dynamics. Our approach can be efficiently implemented under a conventional assumption of a Gaussian Process (GP) prior on the dynamics function. Typically in BOED, acquisition functions are used to sequentially design experiments that are maximally informative about some quantity of interest by repeatedly choosing the maximizer, running the experiment, and recomputing the acquisition function with the new data. Generalizing this procedure, we propose a simple algorithm that is able to solve a wide variety of control tasks, often using orders of magnitude less data than competitor methods to reach similar asymptotic performance.

Preliminaries

In this work, we consider an RL agent that operates in an environment with unknown dynamics. This is a general RL model for decision-making problems in which the agent starts without any knowledge of how its actions impact the world. The agent can then query different state-action pairs to explore the environment and find a behavior policy that results in the best reward. For example, in the plasma control task, the states are various physical configurations of the plasma, and possible actions include injecting power and changing the current. The agent does not have prior knowledge of how its actions impact the conditions of the plasma. Thus, it needs to quickly explore the space to ensure efficient and safe operation of the physical system, a requirement captured in the corresponding reward function.

Given that each observation of a state-action pair is costly, an agent needs to query as few state-action pairs as possible and in this work we develop an algorithm that informs the agent about which queries to make.

We operate under a setting we call transition query reinforcement learning (TQRL). In this setting, an agent can sequentially query the dynamics at arbitrary states and actions to learn a good policy, essentially teleporting between states as it wishes. Traditionally, in the rollout setting, agents must simply choose actions and execute entire episodes to collect data. TQRL is therefore a slightly more informative form of access to the real environment.

Precise Definition of the Setup

More precisely: we address finite-horizon discrete-time Markov Decision Processes (MDPs), which consist of a tuple \(\langle \statespace, \actionspace, T, r, p_0, H\rangle\) where:

  • \(\statespace\) is the state space
  • \(\actionspace\) is the action space
  • \(T\) (dynamics) is the stochastic transition function that maps (state, action) pairs \(\statespace \times \actionspace\) to a probability distribution over states \(\statespace\)
  • \(r: \statespace\times\actionspace \to \Rbb\) is a reward function
  • \(p_0\) is a start distribution over states \(\statespace\)
  • \(H\) is an integer-valued horizon, that is, the number of steps the agent will perform in the environment

We assume that all of these parameters are known besides the dynamics \(T\). The key quantity that defines the behaviour of the agent is its policy \(\pi: \statespace \to \actionspace\), which tells the agent what action to take in a given state. The overall goal of the agent is thus to find a policy that maximizes the cumulative reward over the trajectory \(\tau \sim p(\tau\mid \pi, T)\) followed by the agent. Formally, a trajectory is simply a sequence of (state, action) pairs \(\tau = [s_0, a_0, \dots, a_{H-1}, s_H]\), where \(a_i = \pi(s_i)\) is the action taken by the agent at step \(i\) and \(s_i \sim T(s_{i-1}, a_{i-1})\) is the state of the agent at time \(i\). Denoting the cumulative reward over trajectory \(\tau\) as \(R(\tau)\), the agent needs to solve the following optimization problem:

$$\max_\pi \Ebb_{\tau \sim p(\tau\mid \pi, T)}\left[R(\tau)\right].$$

We denote an optimal policy for given dynamics \(T\) by \(\pi^*_T\). As we know the other parts of the MDP, to solve the optimization problem we need to learn a model \(\hat{T}\) of the transition function.

Main Idea

Inspired by BOED and Bayesian algorithm execution (BAX) [11], we use ideas from information theory to motivate our method to effectively choose data points. Our goal is to sequentially choose queries \((s, a)\) such that our agent quickly finds a good policy. We observe that to perform the task successfully, we do not need to approximate the optimal policy \(\pi^*\) everywhere in the state space. Indeed, there could be regions of the state space that are unlikely to be visited by the optimal policy. Thus, we only need to approximate the optimal policy in the regions of the state space that are visited by the optimal policy.

Therefore, we choose to learn about \(\tau^*\)—the optimal trajectory governed by the optimal policy \(\pi^*\). This objective only requires data about the areas we believe \(\pi^*\) will visit as it solves the task, so intuitively we should not “waste” samples on irrelevant regions in the state-action space. In plasma control, this idea might look like designing experiments in certain areas of the state and action space that will teach us the most about controlling plasma in the target regimes we need to maintain fusion.

We thus define our acquisition function to be the expected information gain about \(\tau^*\) from sampling a point \(T(s, a)\) given a dataset \(D\): $$\EIG_{\tau^*}(s, a) = \Hbb[\tau^* \mid D] - \Ebb_{s'\sim T(s, a)}\left[\Hbb[\tau^*\mid D\cup \{(s, a, s')\}]\right].$$ Intuitively, this quantity measures how much the additional data is expected to reduce the uncertainty (here given by the Shannon entropy, denoted \(\Hbb\)) about the optimal trajectory.

At a high level, following methods related to the InfoBAX algorithm [11], we can approximate this acquisition function by a three-step procedure:

  • First, we sample many possible dynamics functions from our posterior (e.g., functions that describe plasma evolution).
  • Second, we find optimal trajectories on each of the sampled dynamics functions without taking new data from the environment, as we can simulate controls on these dynamics.
  • Third, we compute predictive entropies of our model at \((s, a)\) and of our model with additional data taken from each optimal trajectory. We can then subtract the trajectory-conditioned entropy from the original entropy.

This final step allows us to estimate the mutual information between \(T(s, a)\) and \(\tau^*\) given \(D\), which is precisely the quantity we want. We give a more precise description of this below.

Given a task of regulating plasma to the goal conditions (green dot), we compute posterior samples of the optimal trajectory (paths in color). We can then estimate the point (red circle) with maximal mutual information with these optimal trajectories and query the dynamics at that point.

Computing \(\EIG_{\tau^*}\) via posterior function sampling

More formally: as \(\tau^*\) is a high-dimensional object that implicitly assumes access to an optimal decision-making policy, it is not obvious that the entropies involved in computing \(\EIG_{\tau^*}\) will be easy to estimate. However, by making two additional assumptions and leveraging properties of mutual information, we can derive a practical method for estimating \(\EIG_{\tau^*}\). In particular, we need to assume that:

  • The dynamics \(T\) are drawn from a GP prior \(P(T)\), a fairly mild assumption since GPs are universal approximators [10].
  • \(\pi_T \approx \pi^*\) for an MDP with known rewards and transition function \(T\), i.e., a model-predictive control (MPC) policy using the known dynamics will be close to optimal on those dynamics. This is not true in all settings, and we investigate how crucial this assumption is to the performance of the algorithm in our experiments section.

We know from information theory that $$\EIG_{\tau^*}(s,a) = I(\tau^*; T(s, a)) = \Hbb[T(s, a)\mid D] - \Ebb_{\tau^*\sim P(\tau^* \mid D)}\left[\Hbb[T(s, a)\mid D\cup \tau^*]\right],$$ where \(I\) refers to the mutual information. This expression is much easier to deal with, given a GP. Since, under a GP prior, the marginal posterior at any point in the domain is a Gaussian available in closed form, we can exactly compute the left term \(\Hbb[T(s, a)\mid D]\). We compute the right term \(\Ebb_{\tau^*\sim P(\tau^* \mid D)}\left[\Hbb[T(s, a)\mid D\cup \tau^*]\right]\) via a Monte Carlo approximation, sampling \(T' \sim P(T'\mid D)\) (doable efficiently due to [6]) and then sampling trajectories \(\tau\sim P(\tau \mid T', \pi_{T'})\) by executing MPC using \(T'\) as both the dynamics model used for planning and the dynamics function of the MDP used to sample transitions in \(\tau\). As \(\tau\) is a sequence of state-action transitions, it essentially provides more data for our estimate of the transition model. So it is straightforward to compute the model posterior \(P(T(s, a)\mid D\cup\tau^*)\) (which again must be Gaussian) and read off the entropy of the prediction. The full Monte Carlo estimator is $$\EIG_{\tau^*}(s, a) \approx \Hbb[T(s, a)\mid D] - \frac{1}{n}\sum_{i \in [n]}\Hbb[T(s, a)\mid D\cup \tau_i]$$ for \(\tau_i\) sampled as described above.

In summary, we can estimate our acquisition function via the following procedure, which is subject to the two assumptions listed above:

  1. Sample many functions \(T_i\sim P(T\mid D)\)
  2. Sample trajectories \(\tau_i \sim P(\tau\mid T_i, \pi_{T_i})\) by executing the MPC policy \(\pi_{T_i}\) on the dynamics \(T_i\).
  3. Compute the entropies \(\Hbb[T(s, a)\mid D]\) and \(\Hbb[T(s, a)\mid D\cup \tau_i]\), for all \(i\), using standard GP techniques.
  4. Compute the acquisition function using our Monte Carlo estimator, as sketched in the code below.
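Below is a minimal numpy/scikit-learn sketch of steps 3 and 4 for a scalar next-state model, assuming the posterior trajectories from steps 1 and 2 are already available as transition datasets; the function name and shapes are our own illustration, not the paper's code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gaussian_entropy(std):
    # Differential entropy of a Gaussian with standard deviation `std`.
    return 0.5 * np.log(2 * np.pi * np.e * np.maximum(std, 1e-12) ** 2)

def eig_tau_star(gp, X_data, y_data, x_query, trajectories):
    """Monte Carlo estimate of EIG_{tau*} at each row of x_query ((s, a) pairs).
    trajectories: list of (X_tau, y_tau) transition datasets, one per
    posterior dynamics sample."""
    gp.fit(X_data, y_data)
    _, std_prior = gp.predict(x_query, return_std=True)
    h_prior = gaussian_entropy(std_prior)            # H[T(s,a) | D]

    h_post = np.zeros_like(h_prior)
    for X_tau, y_tau in trajectories:
        # Condition on D plus tau_i with fixed kernel hyperparameters and
        # recompute the predictive entropy.
        gp_i = GaussianProcessRegressor(kernel=gp.kernel_, optimizer=None)
        gp_i.fit(np.vstack([X_data, X_tau]), np.concatenate([y_data, y_tau]))
        _, std_i = gp_i.predict(x_query, return_std=True)
        h_post += gaussian_entropy(std_i)

    return h_prior - h_post / len(trajectories)      # the estimator above
```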

Inspired by the main ideas of BOED and active learning, we give a simple greedy procedure which we call BARL (Bayesian Active Reinforcement Learning) for using our acquisition function to acquire data given some initial dataset:

  1. Compute \(\EIG_{\tau^*}(s, a)\) given the dataset for a large random set of state-action pairs. Samples of \(\tau^*\) can be reused between these points.
  2. Sample \(s' \sim T(s, a)\) for the \((s, a)\) that was found to maximize the acquisition function, and add \((s, a, s')\) to the dataset.
  3. Repeat steps 1-2 until the query budget is exhausted. The evaluation policy is simply MPC on the GP posterior mean.
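Reusing `eig_tau_star` and the imports from the sketch above, one iteration of steps 1 and 2 on toy data might look like this (the transitions and "optimal trajectories" are fabricated purely for illustration):

```python
rng = np.random.default_rng(0)
X_data = rng.uniform(-1, 1, size=(5, 2))             # observed (s, a) queries
y_data = np.sin(X_data[:, 0]) + 0.1 * X_data[:, 1]   # observed next states s'
# Fake posterior-sampled "optimal trajectories" as transition datasets:
trajs = [(rng.uniform(-1, 1, (4, 2)), rng.normal(size=4)) for _ in range(3)]
candidates = rng.uniform(-1, 1, size=(200, 2))       # random (s, a) candidates
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
scores = eig_tau_star(gp, X_data, y_data, candidates, trajs)
best_query = candidates[scores.argmax()]             # the point to query next
```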

Does BARL reduce the data requirements of RL?

We evaluate BARL in the TQRL setting on five environments that span a variety of reward function types, dimensionalities, and amounts of required data. In this evaluation, we estimate the minimum amount of data an algorithm needs to learn a controller. The evaluation environments include the standard underactuated pendulum swing-up task, a cartpole swing-up task, the standard 2-DOF reacher task, a navigation problem where the agent must find a path across pools of lava, and a simulated nuclear fusion control problem where the agent is tasked with modulating the power injected into the plasma to achieve a target pressure.

To assess the performance of BARL in solving MDPs quickly, we assembled a group of reinforcement learning algorithms that represent the state of the art in solving continuous MDPs. We compare against the model-based algorithms PILCO [7], PETS [2], model-predictive control with a GP (MPC), and uncertainty sampling with a GP (\(\EIG_T\)), as well as the model-free algorithms SAC [3], TD3 [8], and PPO [9]. Besides uncertainty sampling (which operates in the TQRL setting and is directly comparable to BARL), these methods rely on the rollout setting for RL and are somewhat disadvantaged relative to BARL.

BARL clearly outperforms each of the comparison methods in data efficiency on nearly every problem. We see that simpler methods like \(\EIG_T\) and MPC perform well on lower-dimensional problems like Lava Path, Pendulum, and Beta Tracking, but struggle with the higher-dimensional Reacher problem. Model-free methods like SAC, TD3, and PPO are notably sample-hungry.

Sample Complexity: Median number of samples across 5 seeds required to reach the performance of MPC on the ground truth dynamics, averaged across 5 trials on our control environments. We record N/A when the median run is unable to solve the problem by the end of training.

After further investigation, we also find that models trained on data chosen by BARL are more accurate on the datapoints required to solve the problem, and less accurate on a randomly chosen test set of points, than models trained on data chosen via \(\EIG_T\). This implies that BARL is choosing the "right" data. Since the same model class is used in both methods, there will inevitably be areas of the input space where each method performs better than the other; BARL performs better on the areas that are needed to solve the problem.

We compare BARL and an uncertainty sampling baseline \(\EIG_T\) on three criteria. In the left chart, we plot control performance as queries are made; \(\pi_T\) is the performance of MPC with a perfect model. In the middle, we plot modeling errors for BARL vs. \(\EIG_T\) on the points where the model is queried in order to plan actions. On the right, we plot modeling errors on a uniform test set. BARL models the dynamics well on the points required to plan the optimal actions (middle) while not learning the dynamics well in general (right). This focus on choosing relevant datapoints allows BARL to solve the task quickly (left).

Conclusion

We believe (EIG_{tau^*}) is an important first step towards agents that think proactively about the data that they will acquire in the future. Though we are encouraged by the strong performance we have seen so far, there is substantial future work to be done. In particular, we are currently working to extend BARL to the rollout setting by planning actions that will lead to maximum information in the future. We also aim to solve problems of scaling these ideas to the high-dimensional state and action spaces that are necessary for many real-world problems.

References

[1] Mastering the game of Go with deep neural networks and tree search, Silver et al, Nature 2016

[2] Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models, Chua et al, NeurIPS 2018

[3] Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor, Haarnoja et al, ICML 2018

[4] Cross-Entropy Randomized Motion Planning, Kobilarov et al, RSS 2008

[5] Sample-efficient Cross-Entropy Method for Real-time Planning, Pinneri et al, CoRL 2020

[6] Efficiently Sampling Functions from Gaussian Process Posteriors, Wilson et al, ICML 2020

[7] PILCO: A Model-Based and Data-Efficient Approach to Policy Search, Deisenroth & Rasmussen, ICML 2011

[8] Addressing Function Approximation Error in Actor-Critic Methods, Fujimoto et al, ICML 2018

[9] Proximal Policy Optimization Algorithms, Schulman et al, 2017

[10] Universal Kernels, Micchelli et al, JMLR 2006

[11] Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information, Neiswanger et al, ICML 2021

Read More

Two Experiments in Peer Review: Posting Preprints and Citation Bias

There is increasing interest in computer science and elsewhere to understand and improve peer review (see here for an overview). With this motivation, we conducted two experiments regarding peer review which we summarize in this blog post.

Pros and Cons of Posting Preprints Online

Motivation. Whether authors should post preprints online before review is a widely debated question of policy in double-blind peer review, as well as a matter of authors' personal choice. This choice is especially challenging for authors who perceive that they may be at a disadvantage in the review process if their identity is revealed: by posting their preprints online they stand to gain publicity, but may lose the benefits of double-blind review. We substantiate this debate quantitatively by addressing two research questions.

Research questions. We study the following research questions:

  • (Q1) What fraction of reviewers deliberately search for their assigned paper on the Internet? 
  • (Q2) For preprints posted online, what is the relation between the papers’ visibility and the ranking of the authors’ affiliations?

Methods. We conduct survey-based experiments in the ICML 2021 and EC 2021 conferences. 

  • (Q1) We conduct an anonymized survey, where we ask each reviewer whether they deliberately searched online for any of their assigned papers.
  • (Q2) We consider the papers submitted to ICML or EC which were also available online before review. We survey relevant reviewers to assess the visibility of these papers: we ask them whether they have seen these papers online outside of the reviewing context. Finally, we compute a correlation between papers’ visibility and associated authors’ affiliations’ ranking.

Key findings. We report two main insights:

  • (Q1) More than a third of the respondents self-report searching online for their assigned paper in both ICML and EC.
  • (Q2) We find a weak positive correlation: preprints from better-ranked affiliations enjoy a higher visibility. In ICML, the correlation coefficient is 0.06 and is statistically significant; in EC, the correlation coefficient is 0.05 and is not statistically significant. In particular, papers associated with the top-10-ranked affiliations had a visibility of about 11% in ICML and 22% in EC, whereas the remaining papers had a visibility of 7% and 18% respectively.

Implications. Conference organizers looking to design blinding policies and authors looking to post preprints online can use our findings to gauge the tradeoffs involved.

Details. https://arxiv.org/pdf/2203.17259.pdf

Citation Bias

Motivation.  Many anecdotes suggest that including citations to the works of potential reviewers is a good (albeit unethical) way to increase the acceptance chances of a manuscript. 

Research question. Does the citation of a reviewer’s work in a submission cause the reviewer to be positively biased towards the submission, that is, cause a shift in the reviewer’s evaluation that goes beyond the genuine change in the submission’s scientific merit?

Methods. We pair cited and uncited reviewers for each submitted paper and then carefully analyze the differences in their scores. Our analysis accounts for the many confounding factors that may exist. By pairing reviewers, we alleviate the confounding factor of “paper quality” as both cited and uncited reviewers review the same paper. We also control for confounders related to reviewer identities by accommodating various associated aspects such as the reviewers’ expertise and preferences in reviewing papers. Finally, we analyze reviews of uncited reviewers to exclude cases in which a reviewer genuinely decreases their evaluation of a paper because it fails to cite their own relevant past work. 

Key findings. Our findings suggest that citation bias exists, and papers enjoy higher scores from a cited reviewer. Due to this bias, the expected increase in a cited reviewer’s score is 0.16 (on a 6 point scale) in ICML and 0.23 (on a 5 point scale) in EC. For reference, a one-point increase of a score by a single reviewer improves the position of a submission by 11% on average.

Implications. We detect and quantify the strength of citation bias in peer review, informing stakeholders of the presence of the bias. Our work also raises an important open problem of mitigating citation bias.

Details. https://arxiv.org/pdf/2203.17239.pdf

Acknowledgements

The post is based on joint works with Ivan Stelmakh, Ryan Liu,  Xinwei Shen, Marina Meila, Shuchi Chawla, Federico Echenique, and Nihar B. Shah.

Read More

Assessing Generalization of SGD via Disagreement

Imagine training a deep network twice with two different random seeds on the same data, and then measuring the rate at which they disagree on unlabeled test points. Naively, they can disagree with one another with probability anywhere between zero and twice the error rate. But surprisingly, in practice, we observe that the disagreement and test error of deep neural networks are remarkably close to each other. (In the accompanying figure, the variable \(y\) is the average generalization error of the two models and the variable \(x\) is their disagreement.)

Estimating the generalization error of a model — how well the model performs on unseen data — is a fundamental component in any machine learning system. Generalization performance is traditionally estimated in a supervised manner, by dividing the labeled data into a training set and test set. However, high-quality labels are usually costly and, ideally, we would like to use all of them to train the model. On the other hand, in many real-world settings, a large amount of unlabeled data is readily available. How can we tap into the rich information in these unlabeled data and leverage them to assess a model’s performance without labels?

In this work (full paper), we demonstrate that a simple procedure can accurately estimate the generalization error with only unlabeled data. This result reveals a surprising fact about how neural networks make mistakes and their connection to calibration through an identity we call Generalization Disagreement Equality (GDE).

A surprising observation

Stochastic gradient descent (SGD) is perhaps the most popular optimization algorithm for deep neural networks. Due to the non-convex nature of the deep neural network’s optimization landscape, different runs of SGD will find different solutions. As a result, if the solutions are not perfect, they will disagree with each other on some of the unseen data. This disagreement can be harnessed to estimate generalization error without labels:

  1. Given a model, run SGD with the same hyperparameters but different random seeds on the training data to get two different solutions.
  2. Measure how often the networks’ predictions disagree on a new unlabeled test dataset.

We find that the disagreement rate is approximately equal to the average test error over the two models. Our observation builds on the phenomenon reported by Nakkiran and Bansal (2020) [1]: given two networks of the same architecture trained to zero training error on two independently drawn datasets of the same size, the disagreement rate of the pair on the test dataset is nearly equal to the average test error. Our observation generalizes prior work by showing that the same phenomenon holds for small changes in hyperparameters and, more importantly, the same dataset, which makes the procedure relevant for practical applications.
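In code, the estimator is as simple as it sounds. Here is a small sketch, with placeholder prediction arrays standing in for the outputs of the two training runs:

```python
import numpy as np

def disagreement_rate(preds_a, preds_b):
    # Fraction of unlabeled test points on which the two runs disagree.
    return np.mean(preds_a != preds_b)

# Placeholder class predictions from two SGD runs on the same training data:
preds_a = np.array([0, 1, 1, 2, 0, 1])
preds_b = np.array([0, 1, 2, 2, 0, 0])
print(disagreement_rate(preds_a, preds_b))  # used as an estimate of test error
```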

This procedure estimates the test error with unlabeled data, but the mechanisms behind it are not immediately obvious. Let \(h(x)\) be the prediction of classifier \(h\) and \(y\) be the true label. By the triangle inequality, the disagreement rate can be anywhere between 0 and 2 times the test error: \(0 \leq \mathbb{E}[h(x) \ne h'(x)] \leq \mathbb{E}[h(x) \ne y] + \mathbb{E}[h'(x) \ne y]\). Given that the models observe the same data, we may expect them to extract similar knowledge and consequently make similar errors, which would make the disagreement rate lower than the test error. Yet we find that, in practice, the disagreement rate approximates the test error without any proportionality constant. Why is this the case?

The final parameters found by SGD depend on many sources of randomness: 1) random initialization, 2) random ordering of a fixed training dataset, and/or 3) random sampling of training data. To understand this phenomenon, we analyze what kind of randomness is responsible for this peculiar property. It is possible to have other sources of randomness, such as dropout, which are not studied in this work.

We study the phenomenon by observing how the behavior of disagreement changes when the different sources of randomness are isolated. For example, when examining the effect of different initializations, we fix the dataset and the order in which the dataset is presented to the model, and only change the random initialization. We observe that across different types of randomness, disagreement remains consistently close to the test error. This approach is particularly useful because we can use the same training data to train the two copies of the model. In addition, we empirically observe the phenomenon across a wide range of model architectures (ResNet, VGG, and fully-connected networks) and several popular image recognition benchmarks (MNIST, SVHN, CIFAR-10, and CIFAR-100).

GDE on CIFAR-10: The scatter plots of pairwise model disagreement (x-axis) vs. the test error (y-axis) of different ResNet18 models trained on CIFAR-10. The dashed line is the diagonal where disagreement equals the test error. Orange dots represent models that use data augmentation. The first two plots correspond to pairs of networks trained on independent datasets, and the last two to pairs trained on the same dataset.

Furthermore, estimating generalization error is especially important when the test distribution is different from the training distribution. On that note, we see some promising observations in the PACS data [2]. This dataset consists of 4 different environments (Photo, Art, Cartoon, Sketch) of different objects. For each environment, we trained two ResNet18 models on that environment with different initialization and order of the data and measured their disagreement rate on the remaining environments. We observe that similar phenomena hold across many (but not all) pairs of environments,  suggesting that the technique might be adaptable for estimating generalization error under distribution shift.

GDE under distribution shift: The scatter plots of pair-wise model disagreement (x-axis) vs the test error (y-axis) of the different ResNet50 trained on PACS. Each plot corresponds to models evaluated on the domain specified in the title. The source/training domain is indicated by different marker shapes.

Generalization-Disagreement Equality

We will now theoretically investigate the sufficient condition under which the disagreement rate equals generalization error.

In the deep learning literature, it is well known that ensembles of SGD-trained deep networks are well-calibrated over the training distribution [3]. An ensemble is well-calibrated if the confidence \(\tilde{h}_k(x) = \mathbb{E}_{h \sim H_A}[h_k(x)]\) it outputs for class \(k\) matches the expected accuracy, i.e., \(P(y = k \mid \tilde{h}_k(x) = p) = p\). There exist various formalisms for calibration. In this paper, we show that if an ensemble satisfies class-wise calibration for a distribution \(D\) (a more general form of calibration is discussed in the paper), then \(\mathbb{E}_{h, h'}[\mathbb{E}_D[h(x) \neq h'(x)]] = \mathbb{E}_h[\mathbb{E}_D[h(x) \neq y]]\) with strict equality. We refer to this equality as the Generalization Disagreement Equality (GDE).

Proof Sketch

The proof sketch we show here focuses on a special case, where class-wise calibration holds. Consider binary classification. Say that we have a distribution over the hypotheses \(H_A\) given the SGD algorithm \(A\), and we sample two hypotheses \(h, h'\) from this distribution. Now, given some test data \(D\), we partition it by the confidence output by the ensemble, i.e., \(D_q = P(X \mid \tilde{h}_0(x) = q)\). For any fixed \(x\) in the support of \(D_q\), the disagreement rate in expectation over \(h, h'\) is

$$ \mathbb{E}_{h,h'}[h(x) \ne h'(x)] \\ = P(\tilde{h}(x) = 1)P(\tilde{h}(x) = 0) + P(\tilde{h}(x) = 0)P(\tilde{h}(x) = 1) \\ = q(1-q) + (1-q)q = 2q(1-q).$$

Additionally, note that for any \(x \in D\), the expected error equals \(\tilde{h}_{1-y}(x)\). Since the ensemble is also class-wise calibrated, for any \(x \in D_q\) the expected error over \(h \in H_A\) is

$$\mathbb{E}_h[h(x) \ne y] \\ = \tilde{h}_0(x)P(y = 1 \mid \tilde{h}(x) = 1) + \tilde{h}_1(x)P(y = 0 \mid \tilde{h}(x) = 0) \\ = q(1-q) + (1-q)q = 2q(1-q)$$

We have thus shown that GDE holds within each calibration level set: the disagreement rate \(2q(1-q)\) equals the expected error \(2q(1-q)\). Taking the expectation over \(D\), we conclude that the expected error equals the disagreement rate, i.e., GDE holds.
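As a sanity check of this level-set argument, here is a toy simulation (our own, not from the paper) at a single confidence level \(q\): predictions and calibrated labels are drawn independently, and both disagreement and error come out to \(2q(1-q)\).

```python
import numpy as np

rng = np.random.default_rng(0)
q, n = 0.3, 1_000_000
h = rng.random(n) < q    # run 1's predictions within the level set D_q
h2 = rng.random(n) < q   # run 2's predictions, drawn independently
y = rng.random(n) < q    # labels consistent with class-wise calibration
print((h != h2).mean(), (h != y).mean(), 2 * q * (1 - q))  # all approx. 0.42
```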


Note that the theorem only shows that the test error and expected disagreement are equal in expectation over all the models learned by SGD; it captures the essence of the observation, but explaining why the equality seems to hold for as few as a single pair of training runs is still an open problem. Intuitively, this can be true if the distribution of disagreement has an unusually small variance. In our experiments, we do observe very small variance for disagreement, but a rigorous theoretical account of why the variance is small remains unsettled.

Further, in practice, no models are perfectly calibrated, but we can still characterize how deviation from calibration affects GDE. We show that the calibration error upper bounds the absolute difference between the expected disagreement and the expected test error. This inequality generalizes GDE and implies that as long as the ensemble is reasonably calibrated, we can expect a good estimate of the generalization error from disagreement. Finally, we empirically validate that this assumption often holds in practice by training 100 copies of the same model to produce an ensemble that is faithful to the true ensemble, and measuring their calibration error. The results show that these ensembles are indeed well-calibrated.

Conclusion

We have presented a method for estimating the generalization error of black-box deep neural networks with only unlabeled data, as well as some theoretical motivation for why the method works. Specifically, we showed that if the ensembles of models trained by SGD are well-calibrated, the expected disagreement is equal to the expected test error. Remarkably, in practice, merely a single pair of models is sufficient for accurately estimating the generalization error. In the broader picture, this result marks a departure from more traditional approaches to studying generalization and points to the tantalizing possibility of leveraging unlabeled data to estimate the generalization error. We are excited about the results we presented in the paper, but we are even more excited about the questions we did not answer there. We hope this work will encourage future work to investigate more unconventional ways of understanding generalization in deep learning.

References

[1] Nakkiran and Bansal. Distributional Generalization: A New Kind of Generalization. 2020.
[2] Li et al. Deeper, Broader and Artier Domain Generalization. IEEE Intl. Conf. on Computer Vision (ICCV), 2017.
[3] Lakshminarayanan et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. Conference on Neural Information Processing Systems (NeurIPS), 2017.

Read More

Why Spectral Normalization Stabilizes GANs: Analysis and Improvements

Figure 1: Training instability is one of the biggest challenges in training GANs. Despite the existence of successful heuristics like Spectral Normalization (SN) for improving stability, it is poorly understood why they work. In our research, we theoretically explain why SN stabilizes GAN training. Using these insights, we further propose a better normalization technique for improving GANs’ stability called Bidirectional Scaled Spectral Normalization.

Generative adversarial networks (GANs) are a class of popular generative models enabling many cutting-edge applications such as photorealistic image synthesis. Despite their tremendous success, GANs are notoriously unstable to train—small hyper-parameter changes and even randomness in optimization can cause training to fail altogether, which leads to poor generated samples. One empirical heuristic that is widely used to stabilize GAN training is spectral normalization (SN) (Figure 2). Although it is very widely adopted, little is understood about why it works, and therefore there is little analytical basis for using it, configuring it, and more importantly, improving it.

Figure 2: Spectral normalization divides the weights \(W_i\) by their spectral norms \(\sigma(W_i)\) (i.e., the largest singular value of \(W_i\)).
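
As a concrete picture of what SN computes, here is a minimal PyTorch sketch (our own illustration; PyTorch's built-in `torch.nn.utils.spectral_norm` applies the same normalization to a module's weight in production code) that estimates \(\sigma(W)\) by power iteration and divides:

```python
import torch

def spectral_normalize(W: torch.Tensor, n_iters: int = 5) -> torch.Tensor:
    """Return W / sigma(W) for a 2-D weight matrix, with sigma(W)
    estimated by power iteration."""
    u = torch.randn(W.shape[0])
    for _ in range(n_iters):
        v = torch.nn.functional.normalize(W.t() @ u, dim=0)
        u = torch.nn.functional.normalize(W @ v, dim=0)
    sigma = torch.dot(u, W @ v)  # estimate of the largest singular value
    return W / sigma
```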

In this post, we discuss our recent work at NeurIPS 2021. We prove that spectral normalization controls two well-known failure modes of training stability: exploding and vanishing gradients. More interestingly, we uncover a surprising connection between spectral normalization and neural network initialization techniques, which not only help explain how spectral normalization stabilizes GANs, but also motivate us to design Bidirectional Scaled Spectral Normalization (BSSN), a simple change to spectral normalization that yields better stability than SN (Figure 3). 

Figure 3: The interesting connections we find between spectral normalization and prior initialization techniques: (1) LeCun initialization can help explain why spectral normalization avoids vanishing gradients; (2) Motivated by newer initialization techniques (Xavier and Kaiming), we propose BSSN to further improve spectral normalization.

Exploding and vanishing gradients cause training instability

Exploding and vanishing gradients describe a problem in which gradients either grow or shrink rapidly during training. It is well known in the community that these phenomena are closely related to the instability of GANs. Figure 4 shows an illustrative example: when exploding or vanishing gradients occur, the sample quality measured by inception score (higher is better) deteriorates rapidly.

Figure 4: The close connection between gradient scales and training instability. Left: the gradient norm during the training of three GANs on CIFAR-10, either with exploding, vanishing, or stable gradients. Right: the inception score (measuring sample quality; the higher, the better) of these three GANs. We see that the GANs with bad gradient scales (exploding or vanishing) have worse sample quality as measured by inception score.

In the next section, we will show how spectral normalization alleviates exploding and vanishing gradients, which may explain its success.

How spectral normalization mitigates exploding gradients

The fact that spectral normalization prevents gradient explosion is not too surprising. Intuitively, it achieves this by limiting the ability of weight tensors to amplify inputs in any direction. More precisely, when the spectral norm of the weights is 1 (as ensured by spectral normalization) and the activation functions are 1-Lipschitz (e.g., (Leaky)ReLU), we show that

$$\|\text{gradient}\|_{\text{Frobenius}} \leq \sqrt{\text{number of layers}} \cdot \|\text{input}\|.$$

(Please refer to the paper for more general results.) In other words, the gradient norm of spectrally normalized GANs cannot exceed a strict bound. This explains why spectral normalization can mitigate exploding gradients. 

Note that this good property is not unique to spectral normalization—our analysis can also be used to show the same result for other normalization and regularization techniques that control the spectral norm of weights, such as weight normalization and orthogonal regularization. The more surprising and important fact is that spectral normalization can also control vanishing gradients at the same time, as discussed below.

How spectral normalization mitigates vanishing gradients

To understand why spectral normalization prevents gradient vanishing, let’s take a brief detour to the world of neural network initialization. In 1996, LeCun, Bottou, Orr, and Müller introduced a new initialization technique (commonly called LeCun initialization) that aimed to prevent vanishing gradients. It achieved this by carefully setting the variance of the weight initialization distribution as $$\text{Var}(W)=\left(\text{fan-in of the layer}\right)^{-1},$$ where the fan-in of the layer means the number of input connections from the previous layer (e.g., in fully-connected networks, the fan-in of the layer is the number of neurons in the previous layer). LeCun et al. showed that

  • If the weight variance is larger than \(\left(\text{fan-in of the layer}\right)^{-1}\), the internal outputs of the neural network could be saturated by bounded activation or loss functions (e.g., sigmoid), which causes vanishing gradients.
  • If the weight variance is too small, gradients will also vanish because gradient norms are bounded by the scale of the weights.

We show theoretically that spectral normalization controls the variance of weights in a way similar to LeCun initialization. More specifically, for a weight matrix \(W \in \mathbb{R}^{m \times n}\) with i.i.d. entries from a zero-mean Gaussian distribution (common for weight initialization), we show that

$$\text{Var}\left(\text{spectrally-normalized } W\right) ~~~\text{is on the order of}~~~ \left(\max\{m, n\}\right)^{-1}$$

(Please refer to the paper for more general results.) This result has separate implications on the fully-connected layers and convolutional layers:

  • For fully-connected layers with a fixed width across hidden layers, \(\max\{m, n\} = m = n = \text{fan-in of the layer}\). Therefore, spectrally-normalized weights have exactly the desired variance of LeCun initialization!
  • For convolutional layers, the weight (i.e., the convolution kernel) is actually a 4-dimensional tensor: \(W \in \mathbb{R}^{c_{out} \times c_{in} \times k_w \times k_h}\), where \(c_{out}, c_{in}, k_w, k_h\) denote the number of output channels, the number of input channels, the kernel width, and the kernel height, respectively. The popular implementation of spectral normalization normalizes the weights by \(\frac{W}{\sigma\left(W_{c_{out} \times (c_{in} k_w k_h)}\right)}\), where \(\sigma\left(W_{c_{out} \times (c_{in} k_w k_h)}\right)\) is the spectral norm of the reshaped weight, i.e., \(m = c_{out}, n = c_{in} k_w k_h\). In hidden layers, usually \(\max\{m, n\} = \max\{c_{out}, c_{in} k_w k_h\} = c_{in} k_w k_h = \text{fan-in of the layer}\). Therefore, spectrally-normalized convolutional layers also maintain the desired variance of LeCun initialization! (A numeric sanity check of this variance claim is sketched below.)
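
The following is a small self-contained check (our own illustration, with arbitrarily chosen layer sizes) that the variance of a spectrally normalized Gaussian conv kernel is indeed on the order of \(1/\max\{m, n\}\):

```python
import torch

c_out, c_in, k_w, k_h = 64, 128, 3, 3
W = torch.randn(c_out, c_in, k_w, k_h)  # zero-mean Gaussian kernel

m, n = c_out, c_in * k_w * k_h
sigma = torch.linalg.matrix_norm(W.reshape(m, n), ord=2)  # spectral norm
W_sn = W / sigma

print(f"Var(W_sn)    = {W_sn.var().item():.2e}")
print(f"1 / max(m,n) = {1.0 / max(m, n):.2e}")  # same order of magnitude
```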

Whereas LeCun initialization only controls the gradient vanishing problem at the beginning of training, we observe empirically that spectral normalization preserves this nice property throughout training (Figure 5). These results may help explain why spectral normalization controls vanishing gradients during GAN training.

Figure 5: Parameter variances throughout training. The blue lines show the parameter variances of different layers when SN is applied, and the orange line shows our theoretical bound at initialization: \(\left(\max\{m, n\}\right)^{-1}\). The parameter variances of SN are close to the bound throughout training.

How to improve spectral normalization

The next question we ask is: can we use the above theoretical insights to improve spectral normalization? Many advanced initialization techniques have been proposed in recent years to improve LeCun initialization, including Xavier initialization and Kaiming initialization. They derived better parameter variances by incorporating more realistic assumptions into the analysis. We propose Bidirectional Scaled Spectral Normalization (BSSN) so that the parameter variances parallel the ones in these newer initialization techniques:

  • Xavier initialization. The idea of Xavier initialization is to set the variance of the parameter initialization distribution to \(\text{Var}(W)=\left(\frac{\text{fan-in of the layer} + \text{fan-out of the layer}}{2}\right)^{-1}\), which controls not only the variances of the outputs (as in LeCun initialization) but also the variances of the backpropagated gradients, giving better gradient values. We propose Bidirectional Spectral Normalization, which normalizes convolutional kernels by \(\frac{W}{\left(\sigma\left(W_{c_{out} \times (c_{in} k_w k_h)}\right) + \sigma\left(W_{c_{in} \times (c_{out} k_w k_h)}\right)\right)/2}\). We show that by doing this, the parameter variances mimic the ones in Xavier initialization.
  • Kaiming initialization. The analyses behind LeCun and Xavier initialization do not cover activation functions like (Leaky)ReLU, which decrease the scale of the network outputs. To cancel out the effect of (Leaky)ReLU, Kaiming initialization scales up the variances in LeCun or Xavier initialization by a constant. Motivated by this, we propose to scale the above normalization formula by a tunable constant \(c\): \(c \cdot \frac{W}{\left(\sigma\left(W_{c_{out} \times (c_{in} k_w k_h)}\right) + \sigma\left(W_{c_{in} \times (c_{out} k_w k_h)}\right)\right)/2}\). A sketch of the resulting BSSN in code follows this list.
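
Here is a hedged sketch of BSSN applied to a single conv kernel, following the two formulas above (our own illustration; a real implementation would typically estimate the spectral norms by power iteration inside the training loop rather than computing them exactly):

```python
import torch

def bssn(W: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    """Bidirectional Scaled Spectral Normalization of a conv kernel
    W with shape (c_out, c_in, k_w, k_h); c is the tunable scale."""
    c_out, c_in, k_w, k_h = W.shape
    sigma_out = torch.linalg.matrix_norm(W.reshape(c_out, -1), ord=2)
    # Reshape along the input-channel axis for the second spectral norm.
    W_in = W.permute(1, 0, 2, 3).reshape(c_in, -1)
    sigma_in = torch.linalg.matrix_norm(W_in, ord=2)
    return c * W / ((sigma_out + sigma_in) / 2)
```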

BSSN can be easily plugged into GAN training with minimal code changes and little computational overhead. We compare spectral normalization and BSSN on several image datasets, using standard metrics for image quality like inception score (higher is better) and FID (lower is better). We show that simply replacing spectral normalization with BSSN not only makes GAN training more stable (Figure 6), but also improves sample quality (Table 1). Generated samples from BSSN are in Figure 7.

Table 1: Inception score (IS) and FID. Our proposed BSSN method outperforms spectral normalization in sample quality metrics across different datasets by a large margin.
Figure 6: Inception score training curve on CIFAR-10. Spectral normalization (in blue) exhibits (one type of) training instability: the sample quality drops as training proceeds. Our proposed BSSN (in orange) does not have this problem.
Figure 7: Generated samples from BSSN on the CIFAR-10 dataset.

Links

This post only covers a portion of our theoretical and empirical results. Please refer to our NeurIPS 2021 paper and code if you are interested in learning more.

Read More

Improving RL with Lookahead: Learning Off-Policy with Online Planning

Figure 1. Overview of LOOP: Model-free reinforcement learning learns a policy by training a value function. In this setting, the performance of the policy depends on the accuracy of the learned value function. We propose LOOP, an efficient framework for learning with a policy that finds the best action sequence using imaginary rollouts of a learned model. This allows LOOP to potentially reduce its dependence on value function errors. LOOP achieves strong performance across a range of tasks and problem settings.

Model-Free Off-Policy Reinforcement Learning

Reinforcement learning (RL) enables artificial agents to learn different tasks by interacting with the environment. Within RL, off-policy methods have recently brought about numerous successes in learning behaviors efficiently in applications such as robotics, owing to their ability to leverage previously collected data and to incorporate data from a variety of sources.

Figure 2. Illustration of a typical model-free reinforcement learning agent.

How does off-policy reinforcement learning work? A model-free off-policy reinforcement learning algorithm typically consists of a parameterized actor and a value function (see Figure 2). The actor interacts with the environment and stores the collected transitions in a replay buffer. The value function is trained on transitions from the replay buffer to predict the cumulative return of the actor, and the actor is updated by maximizing the action-values at the states visited in the replay buffer. This framework suffers from the following issues:

  1. The performance of the actor is highly dependent on the accuracy of the learned value function. Learning an accurate value function is challenging in deep reinforcement learning with issues pointed out by previous works such as divergence, instability, rank loss, delusional bias and overestimation. 
  2. Traditionally in model-free RL methods, the parametrized actor is a neural network which is uninterpretable and inflexible in dealing with constraints during deployment. On the other hand, risk-sensitive domains such as healthcare or autonomous driving require us to reason about why the policy chose a particular action or incorporate safety constraints.

So, how should the actor choose actions if the value function is inaccurate? In this work, we suggest using a policy that looks ahead into the future with a learned model to find the best action sequence. This lookahead policy is more interpretable than the parametric actor and also allows us to incorporate constraints. We then present a computationally efficient framework for learning with the lookahead policy, which we call LOOP. We also show how LOOP can be applied to the offline RL and safe RL settings in addition to online RL.

H-step Lookahead Policy

In order to increase the performance, safety, and interpretability of reinforcement learning, we use online planning (“H-step lookahead”) with a terminal value function. In H-step lookahead, we use a learned dynamics model to roll out action sequences for an H-step horizon into the future and compute the cumulative reward. To reason about the reward beyond H steps, we attach a value function at the end of the rollout. The objective is to select the action sequence that leads to the rollout with the best cumulative return.

Stated formally, H-step lookahead seeks an action sequence \(a_{0:H-1}\) that maximizes the following objective:

$$\max_{a_{0:H-1}} \mathbb{E}_{\hat{M}}\left[\sum_{t=0}^{H-1}\gamma^t r(s_t,a_t)+\gamma^H \hat{V}(s_H)\right]$$

where \(\hat{M}\) is the learned dynamics model, \(\hat{V}\) is the learned terminal value function, \(r\) is the reward function, and \(\gamma\) is the discount factor.
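
A hedged sketch of how this objective can be optimized by simple random shooting (our own illustration, not the paper's implementation; `model.step`, `reward_fn`, `value_fn`, and `action_space.sample` are hypothetical stand-ins for the learned dynamics, the reward, the terminal value function, and an action sampler):

```python
import numpy as np

def h_step_lookahead(s0, model, reward_fn, value_fn, action_space,
                     H=10, n_candidates=1000, gamma=0.99):
    best_return, best_plan = -np.inf, None
    for _ in range(n_candidates):
        plan = [action_space.sample() for _ in range(H)]
        s, ret = s0, 0.0
        for t, a in enumerate(plan):
            ret += gamma**t * reward_fn(s, a)
            s = model.step(s, a)            # imaginary rollout under M^
        ret += gamma**H * value_fn(s)       # terminal value beyond H steps
        if ret > best_return:
            best_return, best_plan = ret, plan
    return best_plan[0]                     # execute only the first action
```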

H-step lookahead provides several benefits:

  1. It reduces the dependency on value function errors by using model rollouts, which allows it to trade off value errors against model errors.
  2. It offers a degree of interpretability that is missing in fully parametric methods.
  3. It allows the user to incorporate constraints (even non-stationary ones) and behavior priors during deployment.

We can also provide theoretical guarantees demonstrating that using an H-step lookahead instead of a parametric actor (a 1-step greedy actor) can reduce the dependence on value errors by a large margin while introducing a dependence on model errors. Despite the additional model errors, we argue that H-step lookahead is useful because value errors can stem from several sources, as discussed in the previous section. In the low-data regime, value errors can also stem from compounding sampling errors, whereas the model can be expected to have smaller errors since it is trained with denser supervision using supervised learning. We hypothesize that these numerous sources of error in value learning make trading off value errors for model errors beneficial, and our experiments provide empirical evidence for this.

LOOP: Learning Off-Policy with Online Planning

As described above, the H-step lookahead policy uses a terminal value function at the end of the H steps. How do we learn the value function for this H-step lookahead policy? The difficulty is that, to learn a value function, we need to evaluate the H-step lookahead policy from many different states. However, evaluating the H-step lookahead policy is slow (Lowrey et al.), since it requires simulating the dynamics for H steps at every query, making such an approach computationally expensive.

Figure 3. LOOP uses a parameterized actor instead of the H-step lookahead policy to learn the value function in a more computationally efficient manner.

Instead, we propose to learn a parameterized actor to more efficiently learn the terminal value function; to learn the value function, we can evaluate the actor (which is fast) instead of evaluating the H-step lookahead policy (which is slow). We call this approach LOOP: Learning Off-Policy with Online Planning. However, the problem with this approach is that there might be a difference between the H-step lookahead policy and the parametric actor (see Figure 4). The difference between these policies can cause unstable learning, which we refer to as “actor divergence.”

Figure 4. In LOOP, the H-step lookahead policy and the parameterized actor can be different policies causing unstable learning. We refer to this issue as “actor divergence”.

Our solution to actor divergence is to constrain the H-step lookahead policy, via a KL-divergence constraint, to stay close to a prior based on the parametric actor. This constrained optimization helps ensure that the H-step lookahead policy remains similar to the parametric actor, leading to significantly more stable training. Specifically, we propose Actor Regularized Control (ARC), which uses the following objective:

$$p^\tau_{opt}=\text{argmax}_{p^\tau}\, \mathbb{E}_{p^\tau}\left[\mathbb{E}_{\hat{M}}[R_{H,\hat{V}}(s_t,\tau)]\right] \quad \text{s.t.} \quad D_{KL}(p^\tau \,\|\, p^\tau_{prior}) \le \epsilon$$

The inner expectation estimates the return of the H-step lookahead \(R_{H,\hat{V}}\) under model uncertainty, while the outer expectation is over a distribution of action sequences.

In the above objective, we aim to find a distribution \(p^\tau = p^\tau_{opt}\) over action sequences \(\tau\) that maximizes the H-step lookahead return \(R_{H,\hat{V}}(s_t,\tau)\) while ensuring that the distribution of action sequences stays close to some predefined prior \(p^\tau_{prior}\). In ARC we set this prior equal to the parametrized actor, which ensures that the H-step lookahead policy stays close to the parametrized actor while still improving the cumulative return. This constrained optimization has a closed-form solution, given by \(p^\tau_{opt} \propto p^\tau_{prior}\, e^{\frac{1}{\eta}\mathbb{E}_{\hat{M}}[R_{H,\hat{V}}(s_t,\tau)]}\). Since the closed-form solution is unnormalized, we approximate it by a Gaussian and improve the estimate of its mean and variance by iterative self-normalized importance sampling, as sketched below.
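
One ARC-style update could look like the following sketch (our own illustration, assuming the prior over flattened action sequences is a diagonal Gaussian given by the parametrized actor; `rollout_return` is a hypothetical stand-in for the model-based estimate of \(R_{H,\hat{V}}(s_t, \tau)\)):

```python
import numpy as np

def arc_update(mu, std, rollout_return, eta=1.0, n_samples=512):
    """Refit the Gaussian over action sequences by self-normalized
    importance sampling toward p_prior * exp(R / eta).

    mu, std: 1-D arrays of length H * action_dim (flattened sequence).
    """
    taus = mu + std * np.random.randn(n_samples, mu.shape[0])
    returns = np.array([rollout_return(tau) for tau in taus])
    w = np.exp((returns - returns.max()) / eta)   # numerically stable weights
    w /= w.sum()                                  # self-normalization
    new_mu = (w[:, None] * taus).sum(axis=0)
    new_std = np.sqrt((w[:, None] * (taus - new_mu) ** 2).sum(axis=0))
    return new_mu, new_std
```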

LOOP for Offline and Safe RL

In the previous section, we saw that ARC optimizes the expected return in the online RL setting. LOOP can be extended to work in two other domains:

  1. Offline RL: learning from a fixed dataset of collected experience.
  2. Safe RL: learning to maximize rewards while ensuring that constraint violations stay below some threshold.

For offline RL, ARC optimizes the following underestimate of the H-step lookahead return, similar to previous offline RL methods ([1, 2]):

$$\text{mean}_{[K]}[R_{H,\hat{V}}(s_t,\tau)] - \beta_{pess}\,\text{std}_{[K]}[R_{H,\hat{V}}(s_t,\tau)]$$

where \([K]\) denotes the ensemble of dynamics models used for uncertainty estimation and \(\beta_{pess}\) is a hyperparameter.
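
In code, the pessimistic objective is a one-liner over the per-member returns (a minimal sketch; `returns_k` is a hypothetical array holding the lookahead return of one candidate action sequence under each of the \(K\) ensemble members):

```python
import numpy as np

def pessimistic_return(returns_k: np.ndarray, beta_pess: float) -> float:
    """mean_[K](R) - beta_pess * std_[K](R)."""
    return float(returns_k.mean() - beta_pess * returns_k.std())
```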

Figure 5. LOOP can be used for offline RL by using a fixed dataset of transitions and modifying ARC to optimize for underestimate of the expected return.

In this setting, the off-policy algorithm is also replaced by an offline RL algorithm (see Figure 5).

For safe RL, ARC optimizes a constrained H-step lookahead objective which ensures that the cumulative constraint cost within the planning horizon stays below a predefined threshold (see Figure 6):

$$\text{argmax}_{a_t}\, \mathbb{E}_{\hat{M}}[R_{H,\hat{V}}(s_t,\tau)] \quad \text{s.t.} \quad \max_{[K]} \sum_{t'=t}^{t+H-1} \gamma^{t'} c(s_{t'},a_{t'}) \le d_0$$

where \(c\) is the cost function.
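
A sketch of the corresponding action selection (our own illustration, with hypothetical inputs): among candidate plans, discard any whose worst-case discounted cost over the \(K\)-model ensemble exceeds \(d_0\), then pick the feasible plan with the highest return:

```python
import numpy as np

def safe_select(returns: np.ndarray, costs: np.ndarray, d0: float) -> int:
    """returns: (n_plans,); costs: (n_plans, K) discounted cumulative
    costs under each of the K ensemble members."""
    worst_case = costs.max(axis=1)          # max over the model ensemble
    feasible = worst_case <= d0
    if not feasible.any():                  # fall back to least-violating plan
        return int(worst_case.argmin())
    idx = np.where(feasible)[0]
    return int(idx[returns[idx].argmax()])
```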

Figure 6. LOOP can be used for safe RL by modifying ARC to optimize under safety constraints.

Experiments: Online, Offline, and Safe RL

Online RL: We use SAC as the off-policy algorithm in LOOP and test it on a set of MuJoCo locomotion and manipulation tasks. LOOP is compared against a variety of baselines covering model-free (SAC), model-based (PETS-restricted), and hybrid model-free+model-based (MBPO, LOOP-SARSA, SAC-VE) methods. LOOP-SARSA is a variant of LOOP that evaluates the replay buffer policy in its critic.

Figure 7. Learning performance comparison for online RL of LOOP-SAC to model-based and model-free baselines on the MuJoCo locomotion and manipulation tasks.

LOOP-SAC significantly improves performance over SAC, the underlying off-policy algorithm used to learn the terminal value function. The increase in efficiency over SAC empirically confirms that trading off model errors against value errors is indeed beneficial. LOOP-SAC is also competitive with MBPO on locomotion tasks and outperforms it significantly on manipulation tasks.

Figure 8. Environments used to compare the performance of LOOP to baselines. From left: Walker-v2, Ant-v2, PenGoal-v0, Claw-v1, PointGoal-v1

Offline RL: We combine LOOP with two offline RL methods, Critic Regularized Regression (CRR) and Policy in the Latent Action Space (PLAS), and test it on D4RL datasets. LOOP improves over CRR and PLAS with an average improvement of 15.91% and 29.49%, respectively, on the D4RL locomotion datasets. This empirically demonstrates that H-step lookahead improves performance over a pre-trained value function (obtained from offline RL) by reducing the dependence on value errors.

Safe RL: To test the safety performance of LOOP, we experiment on the OpenAI Safety Gym environments. In the two environments, CarGoal and PointGoal, the agent needs to navigate to a goal while avoiding obstacles.

Figure 9. Comparison of constraint violations (left two plots) and cumulative return (right two plots) on OpenAI safety gym environments of safeLOOP compared to baselines.

SafeLOOP (figure above) is the modification of LOOP with the constrained H-step lookahead that incorporates safety constraints. SafeLOOP learns orders of magnitude faster than safe RL baselines while still being safer.

Next Steps

A benefit of using H-step lookahead for deployment is its ability to incorporate non-stationary exploration priors, as this framework disentangles the exploitation policy (parametrized actor) and the exploration policy (H-step lookahead) to a certain degree. Exploring how more principled exploration techniques can enable data collection that leads to better policy improvement is an interesting future direction.    

Learning efficiently with H-step lookahead is challenging and hard to scale. In our work, we demonstrated one particular way to learn efficiently with H-step lookahead, but our approach introduced the issue of actor divergence. Some open questions are: 1. What other ways of learning efficiently with an H-step lookahead policy do not suffer from actor divergence? 2. How can actor divergence be reduced without restricting the H-step lookahead policy to be near the parametrized policy (e.g., offline RL)?

Further reading

If you’re interested in more details, please check out the links to the full paper, the project website, talk, and more!

Citation

This blog post is based on the following paper (BibTeX) :

Harshit Sikchi, Wenxuan Zhou, and David Held.
Learning Off-Policy with Online Planning.
In Conference on Robot Learning, November 2021.

Acknowledgments

Thanks to Wenxuan Zhou, Ben Eysenbach, Paul Liang, and David Held for feedback on this post! This material is based upon work supported by the United States Air Force and DARPA under Contract No. FA8750-18-C-0092, LG Electronics and the National Science Foundation under Grant No. IIS-1849154.

Read More

An Energy-based Perspective on Learning Observation Models

Fig. 1 We show that learning observation models can be viewed as shaping energy functions that graph optimizers, even non-differentiable ones, optimize. Inference solves for the most likely states \(x\) given the model and input measurements \(z\). Learning uses training data to update the observation model parameters \(\theta\).

Robots perceive the rich world around them through the lens of their sensors. Each sensor observation is a tiny window into the world that provides only a partial, simplified view of reality. To complete their tasks, robots combine multiple readings from sensors into an internal task-specific representation of the world that we call state. This internal picture of the world enables robots to evaluate the consequences of possible actions and eventually achieve their goals. Thus, for successful operations, it is extremely important to map sensor readings into states in an efficient and accurate manner.

Conventionally, the mapping from sensor readings to states relies on models handcrafted by human designers. However, as sensors become more sophisticated and capture novel modalities, constructing such models becomes increasingly difficult. Instead, a more scalable way forward is to convert sensors to tensors and use the power of machine learning to translate sensor readings into efficient representations. This brings up the key question of this post: What is the learning objective? In our recent CoRL paper, LEO: Learning Energy-based Models in Factor Graph Optimization, we propose a conceptually novel approach to mapping sensor readings into states: we learn observation models from data and argue that learning must be done with the optimizer in the loop.

How does a robot infer states from observations?

Consider a robot hand manipulating an occluded object with only tactile image feedback. The robot never directly sees the object: all it sees is a sequence of tactile images (Lambeta 2020, Yuan 2017). Take a look at any one such image (Fig. 2). Can you tell what the object is and where it might be just from looking at a single image? It seems difficult, right? This is why robots need to fuse information collected from multiple images.

Fig. 2 A robot hand receives a sequence of tactile image observations from which it must infer a sequence of latent object poses. We formulate this as an inference over a factor graph that in turn relies on a mapping between observations and states provided by an observation model. 

How do we fuse information from multiple observations? A powerful way to do this is by using a factor graph (Dellaert 2017). This approach maintains and dynamically updates a graph where variable nodes are the latent states and edges or factor nodes encode measurements as constraints between variables. Inference solves for the most likely sequence of states given a sequence of observations. This boils down to an optimization problem that can be solved efficiently in an online fashion.

Factor graphs rely on the user specifying an observation model that encodes how likely an observation is given a set of states. The observation model defines the cost landscape that the graph optimizer minimizes. However, in many domains, sensors that produce observations are complex and difficult to model. Instead, we would like to learn the observation model directly from data.

Can we learn observation models from data?

Let’s say we have a dataset of pairs of ground truth state trajectories \(x_{gt}\) and observations \(z\). Consider an observation model with learnable parameters \(\theta\) that maps states and observations to a cost. This cost is then minimized by the graph optimizer to get a trajectory \(\hat{x}\). Our objective is to minimize the end-to-end loss \(L_{\theta}(\hat{x}, x_{gt})\) between the optimized trajectory \(\hat{x}\) and the ground truth \(x_{gt}\).

What would a typical learning procedure look like? In the forward pass, we have a learned observation model feeding into the graph optimizer to produce an optimized trajectory \(\hat{x}\) (Fig. 3). This is then compared to a ground truth trajectory \(x_{gt}\) to compute the loss \(L_{\theta}(\cdot)\). The loss is then back-propagated through the inference step to update \(\theta\).

Fig. 3 Two problems arise when directly back-propagating through the optimizer. (a) Many state-of-the-art graph optimizers are not natively differentiable. (b) Differentiation via unrolling is sensitive to the optimization procedure used during training.

However, there are two problems with this approach. The first problem is that many state-of-the-art optimizers are not natively differentiable. For instance, the iSAM2 graph optimizer (Kaess 2012) used popularly for simultaneous localization and mapping (SLAM) invokes a series of non-differentiable Bayes tree operations and heuristics (Fig. 3a). Secondly, even if one did want to differentiate through the nonlinear optimizer, one would typically do this by unrolling successive optimization steps, and then propagating back gradients through the optimization procedure (Fig. 3b). However, training in this manner can have undesired effects. For example, prior work (Amos 2020) shows instances where even though unrolling gradient descent drives training loss to 0, the resulting cost landscape does not have a minimum on the ground truth. In other words, the learned cost landscape is sensitive to the optimization procedure used during training, e.g., the number of unrolling steps or learning rate.

Learning an energy landscape for the optimizer

We argue that instead of worrying about the optimization procedure, we should only care about the landscape of the energy or cost function that the optimizer has to minimize. We would like to learn an observation model that creates an energy landscape with a global minimum on the ground truth. This is precisely what energy-based models aim to do (LeCun 2006) and that is what we propose in our novel approach LEO that applies ideas from energy-based learning to our problem.

How does LEO work? Let us now demonstrate the inner workings of our approach on a toy example. For this, consider a one-dimensional visualization of the energy function shown in Fig. 4, where we collapse trajectories down to a single axis. LEO begins by initializing with a random energy function. Note that the ground truth \(x_{gt}\) is far from the minimum. LEO samples trajectories \(\hat{x}\) around the minimum by invoking the graph optimizer. It then compares these against ground truth trajectories and updates the energy function. The energy-based update rule is simple — push down the cost of ground truth trajectories \(x_{gt}\) and push up the cost of samples \(\hat{x}\), with the cost of samples effectively acting as a contrastive term. If we keep iterating on this process, the minimum of the energy function eventually centers around the ground truth. At convergence, the gradients of the samples \(\hat{x}\) on average match the gradient of the ground truth trajectory \(x_{gt}\). Since the samples range over a continuous space of trajectories for which exact computation of the log partition function is intractable, we propose a way to generate such samples efficiently using an incremental Gauss-Newton approximation (Kaess 2012).

Fig. 4 Let’s initialize with a random energy function shown in blue. LEO begins by sampling trajectories \(\hat{x}\) around the minimum by invoking the graph optimizer. It then compares these against ground truth trajectories and updates the energy function. The energy-based update rule is simple — push down the cost of ground truth trajectories and push up samples around the minimum. Upon repeating this process, the minimum of the energy function eventually centers around the ground truth. At convergence, the gradients of the samples \(\hat{x}\) on average match the gradient of the ground truth trajectory \(x_{gt}\).
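
The contrastive update itself is compact. A hedged PyTorch sketch (our own illustration; `energy` is a hypothetical network \(E_\theta(x, z)\), and `sample_from_optimizer` stands in for drawing trajectories around the graph optimizer's minimum, e.g., via the incremental Gauss-Newton approximation):

```python
import torch

def leo_step(energy, opt_theta, x_gt, z, sample_from_optimizer):
    x_samples = sample_from_optimizer(energy, z)
    # Push down the energy of the ground truth, push up the energy of the
    # optimizer's samples (the contrastive term).
    loss = energy(x_gt, z) - energy(x_samples, z).mean()
    opt_theta.zero_grad()
    loss.backward()
    opt_theta.step()
    return loss.item()
```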

How does LEO perform in practice? Let’s begin with a simple regression problem (Amos 2020). We have pairs \((x, y)\) from a ground truth function \(y = x\sin(x)\), and we would like to learn an energy function \(E_\theta(x, y)\) such that \(y = \operatorname{argmin}_{y'} E_\theta(x, y')\). LEO begins with a random energy function, but after a few iterations it learns an energy function with a distinct minimum around the ground truth function, shown as a solid line (Fig. 5). Contrast this with the energy functions learned by unrolling: not only do they lack a distinct minimum around the ground truth, but they also vary with parameters like the number of unrolling iterations. This is because the learned energy landscape is specific to the optimization procedure used during learning.

Fig. 5 Visualizing learned energy landscapes on a 1D regression problem. The goal is to learn a network that maps points \((x, y)\) to an energy value \(E_\theta(x, y)\), given samples from a ground truth function \(y = x\sin(x)\) shown as a solid line. LEO learns an energy landscape \(E_\theta(\cdot)\) with a distinct minimum (in white) around the ground truth. In contrast, the energy landscape learned via unrolling does not have a distinct minimum (in white) and varies with the optimization parameters used during training.

Application to Robotics Problems

We evaluate LEO on two distinct robot applications, comparing it to baselines that either learn sequence-to-sequence networks or black-box search methods.

The first is a synthetic navigation task where the goal is to estimate robot poses from odometry and GPS observations. Here we are learning covariances, e.g., how noisy is GPS compared to odometry. Even though LEO is initialized far from the ground truth, it is able to learn parameters that pull the optimized trajectories close to the ground truth (Fig. 6).

Fig. 6 Synthetic navigation application. LEO learns covariance parameters every iteration to drive optimized trajectories close to ground truth.

We also look at a real-world manipulation task where an end-effector equipped with a touch sensor pushes an object (Sodhi 2021). Here we learn a tactile model that maps a pair of tactile images to relative poses, used in conjunction with physics and geometric priors. We show that LEO is able to learn parameters that pull the optimized trajectories close to the ground truth for various object shapes and pushing trajectories (Fig. 7).

Fig. 7 Real-world planar pushing application. LEO learns model parameters every iteration to drive optimized trajectories close to ground truth.

Parting Thoughts

While we focused on learning observation models for perception, the insights on learning energy landscapes for an optimizer extend to other domains such as control and planning. An increasingly unified view of robot perception and control is that both are fundamentally optimization problems. For perception, the objective is to optimize a sequence of states that explain the observations. For control, the objective is to optimize a sequence of actions that accomplish a task.

But what should the objective function be for both of these optimizations? Instead of hand designing observation models for perception or hand designing cost functions for control, we should leverage machine learning to learn these functions from data. To do so easily at scale, it is imperative that we build robotics pipelines and datasets that facilitate learning with optimizers in the loop.

Paper: https://arxiv.org/abs/2108.02274
Code+Video: https://psodhi.github.io/leo

References

[1] Dellaert and Kaess. Factor graphs for robot perception. Foundations and Trends in Robotics, 2017.
[2] Kaess et al. iSAM2: Incremental smoothing and mapping using the Bayes tree. Intl. J. of Robotics Research (IJRR), 2012.
[3] LeCun et al. A tutorial on energy-based learning. Predicting structured data, 2006.
[4] Ziebart et al. Maximum entropy inverse reinforcement learning. AAAI Conf. on Artificial Intelligence, 2008.
[5] Amos and Yarats. The differentiable cross-entropy method. International Conference on Machine Learning (ICML), 2020.
[6] Yi et al. Differentiable factor graph optimization for learning smoothers. IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2021.
[7] Sodhi et al. Learning tactile models for factor graph-based estimation. IEEE Intl. Conf. on Robotics and Automation (ICRA), 2021.
[8] Lambeta et al. DIGIT: A novel design for a low-cost compact high-resolution tactile sensor with application to in-hand manipulation. IEEE Robotics and Automation Letters (RAL), 2020.

Read More

Understanding User Interfaces with Screen Parsing

This blog post summarizes our paper Screen Parsing: Towards Reverse Engineering of UI Models from Screenshots, which was published in the proceedings of UIST 2021. [This blog post was cross-posted in the ACM UIST 2021 Blog on 10/22/2021.]

The Benefits of Machines that Understand User Interfaces

Machines that understand and operate user interfaces (UIs) on behalf of users could offer many benefits. For example, a screen reader (e.g., VoiceOver and TalkBack) could facilitate access to UIs for blind and visually impaired users, and task automation agents (e.g., Siri Shortcuts and IFTTT) could allow users to automate repetitive or complex tasks with their devices more efficiently. These benefits are gated on how well these systems can understand an underlying app’s UI by reasoning about 1) the functionality present, 2) how its different components work together, and 3) how it can be operated to accomplish some goal. Many rely on the availability of UI metadata (e.g., the view hierarchy and the accessibility hierarchy), which provide some information about what elements are present and their properties. However, this metadata is often unavailable due to poor toolkit support and low developer awareness. To maximize their support of apps and when they are helpful to users, these systems can benefit from understanding UIs solely from visual appearance.

Recent efforts have focused on predicting the presence of an app’s on-screen elements and semantic regions solely from its visual appearance. These have enabled many useful applications, such as allowing assistive technology to work with inaccessible apps and example-based search for UI designers. However, they constitute only a surface-level understanding of UIs, as they primarily focus on extracting what elements are on a screen and where they appear spatially. To further advance the UI understanding capabilities of machines and perform more valuable tasks, we focus on modeling higher-level relationships by predicting UI structure.

Our work makes the following contributions:

  • A problem definition of screen parsing, which is useful for a wide range of UI modeling applications
  • A description of our implementation and its training procedure
  • A comprehensive evaluation of our implementation with baseline comparisons
  • Three implemented examples of how our model can be used to facilitate downstream applications: (i) UI similarity, (ii) accessibility metadata generation, and (iii) code generation

Achieving Better Understanding of UIs through Hierarchy

An example of an input screen (Left) and the corresponding UI Hierarchy (Right). The tree contains all of the visible elements on the screen (the output is complete), groups them together to form higher-level structures (abstractive), and nodes can be used to reference UI elements (the output is grounded)

Structural representations enhance the understanding of many types of content by capturing higher-level semantics. For example, scene graphs enrich visual scenes by making sense of interactions between individual objects and parse trees disambiguate sentences by analyzing their grammar. Similarly, structure is a core property of UIs reflected in how they are constructed (i.e., stacking together views and widgets) and used. Modeling element relationships can help machines perceive UIs as humans do — not as a set of elements but as a coordinated presentation of content.

We introduce the problem of screen parsing, which we use to predict structured UI models (a high-level definition of a UI) from visual information. We focus on generating an app screen’s UI hierarchy, which specifies how UI elements are grouped and rendered on the screen. The following are properties of UI hierarchies:

  • Complete — the output is a single directed tree that spans all of the UI elements on a screen
  • Grounded — nodes in the output reference on-screen elements and regions
  • Abstractive — the output can group elements (potentially more than once) to form higher-level structures.

Predicting UI Hierarchy from a Screenshot

An overview of our implementation of screen parsing. To infer the structure of an app screen, our system (i) detects the location and type of UI elements from a screenshot, (ii) predicts a graph structure that describes the relationships between UI elements, and (iii) classifies groups of UI elements.

To predict UI hierarchy from a screenshot, we built a system to:

  1. detect the location and type of UI elements from a screenshot,
  2. predict a hierarchical structure that describes the relationships between them, and
  3. classify semantic groups.

The first step of our system processes a screenshot image using an object detection model (Faster-RCNN), which produces a list of UI element detections. The output is post-processed using standard techniques such as confidence-thresholding and non-max suppression. This list tells us what elements are on the screen and where they are but does not provide any information about their relationship.
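
The post-processing step might look like the following sketch (our own illustration using torchvision's `nms`; `boxes`, `scores`, and `labels` are assumed to be the detector's raw outputs):

```python
import torch
from torchvision.ops import nms

def postprocess(boxes, scores, labels, score_thresh=0.5, iou_thresh=0.5):
    keep = scores > score_thresh                  # confidence thresholding
    boxes, scores, labels = boxes[keep], scores[keep], labels[keep]
    keep = nms(boxes, scores, iou_thresh)         # non-max suppression
    return boxes[keep], scores[keep], labels[keep]
```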

Next, we use a stack-pointer parsing model to generate a tree structure representing the UI hierarchy. Like other transition-based parsers, our model incrementally predicts a tree structure by generating a sequence of actions that build connections between UI elements using a pointer mechanism. We made two modifications to the original stack-pointer dependency parser to adapt it to UI hierarchies. First, we injected a “container” token into the input, allowing the model to create multi-level groupings. Second, we trained the model using a dynamic oracle to reduce exposure bias, since the multi-level nature of UI hierarchies leads to exponentially many “optimal” action sequences that produce the same output.

To illustrate how our model predicts the UI hierarchy, we will describe the inference process (a toy sketch in code follows the action list below). A flat list of detected UI elements is encoded using a bi-directional LSTM encoder (producing a list \(l\) of encoded elements), and the final hidden state is fed to an LSTM decoder network augmented with two data structures: 1) a stack \(s\), which is used by the network as intermediate memory, and 2) a set \(v\), which records the nodes already processed. The stack \(s\) is initialized with a special node that represents the root of the tree. At each timestep, the element on top of \(s\) and the last hidden state are fed into the decoder network, which outputs one of three actions:

  • Arc – A directed edge is created between the node on top of \(s\) (parent) and the node in \(l - v\) with the highest attention score (child). The child is pushed onto \(s\) and added to \(v\). This action attaches one of the detected UI elements to the tree.
  • Emit – An intermediate node (represented as a zero-vector) is created and pushed onto \(s\). This action helps the model represent container or “grouping” elements, such as lists, that do not exist in \(l\).
  • Pop – \(s\) is popped. This occurs when the model has finished adding all of an element’s children to the tree structure.
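
As promised above, here is a toy sketch of this transition system (our own illustration only; `next_action` and `best_child` are hypothetical stand-ins for the decoder's action prediction and the pointer attention over \(l - v\)):

```python
def build_tree(elements, next_action, best_child):
    root = {"label": "root", "children": []}
    s, v = [root], set()                  # stack and visited set
    while s and len(v) < len(elements):
        action = next_action(s, v)
        if action == "ARC":               # attach a detected element
            child = {"label": best_child(s, v), "children": []}
            s[-1]["children"].append(child)
            s.append(child)
            v.add(child["label"])
        elif action == "EMIT":            # create an intermediate container
            node = {"label": "container", "children": []}
            s[-1]["children"].append(node)
            s.append(node)
        else:                             # POP: this subtree is finished
            s.pop()
    return root
```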

This technique for generating parse trees is widely used in NLP, and it has been shown that a correct sequence of actions exists for any target tree. Note that this was originally shown for a limited subset of parse trees known as “projective” parse trees, but recent work has extended it to handle any type of tree.

Finally, we apply a set classification model to label containers (i.e., intermediate nodes) based on their descendants. We defined seven container types (including an “Other” class) that represent common groupings such as collections (e.g., lists, grids), tables, and tab bars.

Screen Parser uses a multi-step process to infer the UI hierarchy from a screenshot. Element detections are iteratively grouped together using a parsing model that produces a sequence of special actions called transitions (transition-based parsing).

We trained our models on two mobile UI datasets: (i) the AMP dataset of ~130,000 iOS screens, and (ii) RICO, a publicly available dataset of ~80,000 Android screens. Both datasets were collected by crowdworkers who installed and explored popular apps across 20+ categories (in some cases excluding certain ones such as games, AR, and multimedia) on the iOS and Android app stores. Each dataset contains screenshots, annotated screens, and a type of metadata called a view hierarchy. The view hierarchy is an artifact generated during UI rendering that describes which interface widgets are used and “stacked” together to produce the final layout. Not all screens in our dataset contain this metadata, especially those from apps created using third-party UI toolkits or game engines, so we apply heuristics to detect and exclude examples with missing or incomplete view hierarchies. The view hierarchies are similar to the presentation model we aim to predict, with a few differences, so we transform them into our target representation by applying graph smoothing, filtering, and element matching between different data sources.

More details about our machine learning models and training procedures can be found in our paper.

Experiments

We used several metrics (e.g., F1 score, graph edit distance) to perform a quantitative evaluation of our system using the test split of our mobile UI datasets. Our main point of comparison was a heuristic-based approach to inferring screen groupings used in previous work, and we found that our system was much more accurate in inferring UI hierarchy. We also found that our improved training procedure led to significant performance gains (23%) over standard methods for training parsers.

Bar chart showing the performance of each system using F1 score and Edit Distance. Screen Parser Dynamic consistently outperforms all baseline systems.

Our system’s performance is affected by a number of factors, such as screen complexity and object detection errors. Accuracy is highest for screens with up to 32 elements and degrades beyond that point, in part due to the increased number of actions the parsing model must correctly predict. Complex and crowded screens introduce the additional difficulty of detecting small UI elements, which our analysis with a matching-based oracle (which computes the best possible matching between the object detection output and the ground truth) shows to be a limiting factor.

Chart comparing the F1 score of Screen Parser and baselines on screens of increasing complexity. The performance of all systems declines for more complex screens, with the largest drop occurring after 32 elements. Complexity is divided into 5 buckets: 0 to 16 elements, 16 to 32 elements, 32 to 48 elements, 48 to 64 elements, and more than 64 elements.

UI Hierarchy Facilitates and Improves Applications

We present a suite of example applications implemented using our screen parsing system. These applications show the versatility of our approach and how the predicted UI hierarchy facilitates many downstream tasks.

UI Similarity

We used our system to generate embedding vectors for different UI screens that capture their structure, instead of their surface-level appearance. We show that embeddings for the same screen are minimally affected by different display settings (e.g., scaling, language, theme, dynamic content).

Many recent efforts in modeling UIs have focused on representing them as fixed-length embedding vectors. These vectors can be trained to encode different properties of UI screens, such as layout, content, and style, and they can be fine-tuned to support downstream tasks. For example, a common application of embedding models is measuring screen similarity, which is represented by distance in embedding space. We believe the performance of such models can be improved by incorporating structural information, an important property of UIs.

The intermediate representation of our parsing model can be used to produce a screen embedding, which describes the hierarchical structure of an app. To generate an embedding of a UI, we feed it into our model and pool the last hidden state of the encoder. This includes information about the position, type, and structure of on-screen elements. Our structural embedding can help minimize variations from display settings such as (i) scaling, (ii) language, (iii) theme, and (iv) small dynamic changes. The properties of our embedding could be useful for some UI understanding applications, such as app crawling and information extraction where it would be beneficial to disentangle screen structure and appearance. For example, an app crawler’s next action should be conditioned on the UI elements present on the screen, not on the user’s current theme. An autoencoder trained on UI screenshots would not have this property.

Accessibility Improvement

Element boxes are annotated using their navigation ordering, where the number represents how many swipes are needed to access the element when using a screen reader. While both results contain errors, in this case, Screen Parser correctly groups more elements, which decreases the number of swipes needed to access elements.

Recent work has successfully generated missing metadata for inaccessible apps by running an object detection model on the UI screenshot. Their approach to generating hierarchical data relies on manually defined heuristics that detect localized patterns between elements. However, these approaches may sometimes fail because they do not have access to global information necessary for resolving ambiguities.

In contrast, our implementation generates a UI hierarchy with a global view of the input, so it can overcome some of the limitations of heuristic-based approaches. We used the predicted UI hierarchy to group together the children of intermediate nodes of height 1 that contained at most one text label, and used the X-Y cut algorithm to determine navigation order. The figure above shows an example where the grouping output of the screen parsing model is more accurate than the one produced by manually-defined heuristics. While this is not always the case, learning grouping rules from data requires much less human effort than manual heuristic engineering.

Code Generation

By mapping nodes in the UI hierarchy to declarative view-creation methods, we can generate code for a UI from its screenshot. Here, a restaurant app is re-rendered on a tablet form-factor.

Existing approaches to code generation also rely on heuristics to detect a limited subset of container types. We employed a technique used by compilers to generate code from abstract syntax trees (the visitor pattern for code generation) and applied it to the predicted UI hierarchy. Specifically, we performed a depth-first traversal of the UI hierarchy using a visitor function that generates code based on the current state (current node and stack). The visitor function emits a SwiftUI control (e.g., Text, Toggle, Button) at every leaf node and a SwiftUI container (e.g., VStack, HStack) at every intermediate node. Additional parameters required by view constructors, such as label text and background color, were extracted using OCR and a small set of heuristics.
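
A toy sketch of such a visitor (our own illustration; the node schema and the emitted SwiftUI-like controls are assumptions for demonstration, not the paper's exact generator):

```python
def emit_swiftui(node, depth=0):
    pad = "    " * depth
    if not node.get("children"):          # leaf node -> SwiftUI control
        return f'{pad}Text("{node.get("text", "")}")'
    container = "VStack" if node.get("axis", "v") == "v" else "HStack"
    body = "\n".join(emit_swiftui(c, depth + 1) for c in node["children"])
    return f"{pad}{container} {{\n{body}\n{pad}}}"

# Example: a two-label column renders as a VStack of Text controls.
print(emit_swiftui({"axis": "v",
                    "children": [{"text": "Hello"}, {"text": "World"}]}))
```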

The resulting code describes the original UI using only relative constraints (even if the original UI did not), allowing it to respond to changes in screen size or device type. The generated code does not contain appearance and style information, which is sometimes necessary to render a similar-looking screen. Nevertheless, prior work has shown that such output can be a useful starting point for UI development, and we believe future work can improve on our approach by detecting these properties.

Conclusion

To help machines better reason about the underlying structure and purpose of UIs, we introduced the problem of screen parsing, the prediction of structured UI models from visual information. Our problem formulation captures the structural properties of UIs and is well-suited for downstream applications that rely on UI understanding. We described the architecture and training procedure for our reference implementation, which predicts an app’s presentation model as a UI hierarchy with high accuracy, surpassing baseline algorithms and training procedures. Finally, we used our system to build three example applications: (i) UI similarity search, (ii) accessibility enhancement, and (iii) code generation from UI screenshots. Screen parsing is an important step towards full machine understanding of UIs and its many benefits, but there is still much left to do. We’re excited by the opportunities at the intersection of HCI and ML, and we encourage other researchers in the ML community to work with us to realize this goal.

Acknowledgements

Many people contributed to this work and gave feedback on this blog post: Xiaoyi Zhang, Jeff Nichols, and Jeff Bigham. This work was done while Jason Wu was an intern at Apple.

For more information about machine learning research at Apple, check out the Apple Machine Learning website.

Paper Citation

Jason Wu, Xiaoyi Zhang, Jeffrey Nichols, and Jeffrey P. Bigham. 2021. Screen Parsing: Towards Reverse Engineering of UI Models from Screenshots. In Proceedings of the 2021 ACM Symposium on User Interface Software & Technology (UIST). Association for Computing Machinery, New York, NY, USA, 1–10. https://doi.org/10.1145/3472749.3474763

Read More

Carnegie Mellon University at NeurIPS 2021

Carnegie Mellon University is proud to present 92 papers in the main conference and 9 papers in the datasets and benchmarks track at the 35th Conference on Neural Information Processing Systems (NeurIPS 2021), which will be held virtually this week. Additionally, CMU faculty and students are co-organizing 6 workshops and 1 tutorial, as well as giving 7 invited talks at the main conference and workshops. Here is a quick overview of the areas our researchers are working on:

Topics of CMU papers at NeurIPS 2021 (each paper may be assigned multiple topics).

We are also proud to collaborate with many other researchers in academia and industry:

Institutions with at least three external collaborators on CMU papers at NeurIPS 2021.

Conference Papers

Algorithms & Optimization

Adversarial Robustness of Streaming Algorithms through Importance Sampling
Vladimir Braverman (Johns Hopkins University) · Avinatan Hassidim (Google) · Yossi Matias (Tel Aviv University) · Mariano Schain (Google) · Sandeep Silwal (Massachusetts Institute of Technology) · Samson Zhou (Carnegie Mellon University)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot G1

Global Convergence of Gradient Descent for Asymmetric Low-Rank Matrix Factorization
Tian Ye (Carnegie Mellon University) · Simon Du (University of Washington)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot F3

Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance
Hongjian Wang (Carnegie Mellon University) · Mert Gurbuzbalaban (Rutgers University) · Lingjiong Zhu (Florida State University) · Umut Simsekli (Inria Paris / ENS) · Murat Erdogdu (University of Toronto)
Tue Dec 07 04:30 PM — 06:00 PM (PST) @ Poster Session 2 Spot E3

Few-Shot Data-Driven Algorithms for Low Rank Approximation
Piotr Indyk (MIT) · Tal Wagner (Microsoft Research Redmond) · David Woodruff (Carnegie Mellon University)
Tue Dec 07 04:30 PM — 06:00 PM (PST) @ Poster Session 2 Spot A3

Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds
Yujia Huang (Caltech) · Huan Zhang (UCLA) · Yuanyuan Shi (Caltech) · J. Zico Kolter (Carnegie Mellon University / Bosch Center for AI) · Anima Anandkumar (NVIDIA / Caltech)
Tue Dec 07 04:30 PM — 06:00 PM (PST) @ Poster Session 2 Spot H2

Controlled Text Generation as Continuous Optimization with Multiple Constraints
Sachin Kumar (CMU) · Eric Malmi (Google) · Aliaksei Severyn (Google) · Yulia Tsvetkov (Department of Computer Science, University of Washington)
Wed Dec 08 04:30 PM — 06:00 PM (PST) @ Poster Session 4 Spot D1

Greedy Approximation Algorithms for Active Sequential Hypothesis Testing
Kyra Gan (Carnegie Mellon University) · Su Jia (CMU) · Andrew Li (Carnegie Mellon University)
Wed Dec 08 04:30 PM — 06:00 PM (PST) @ Poster Session 4 Spot B0

Optimal Sketching for Trace Estimation
Shuli Jiang (Carnegie Mellon University) · Hai Pham (Carnegie Mellon University) · David Woodruff (Carnegie Mellon University) · Richard Zhang (Google Brain)
Wed Dec 08 04:30 PM — 06:00 PM (PST) @ Poster Session 4 Spot A0

Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing
Misha Khodak (CMU) · Renbo Tu (Carnegie Mellon University) · Tian Li (CMU) · Liam Li (Carnegie Mellon University) · Maria-Florina Balcan (Carnegie Mellon University) · Virginia Smith (Carnegie Mellon University) · Ameet S Talwalkar (CMU)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot C1

Linear and Kernel Classification in the Streaming Model: Improved Bounds for Heavy Hitters
Arvind Mahankali (Carnegie Mellon University) · David Woodruff (Carnegie Mellon University)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot E3

On Large-Cohort Training for Federated Learning
Zachary Charles (Google Research) · Zachary Garrett (Google) · Zhouyuan Huo (Google) · Sergei Shmulyian (Google) · Virginia Smith (Carnegie Mellon University)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot D1

The Skellam Mechanism for Differentially Private Federated Learning
Naman Agarwal (Google) · Peter Kairouz (Google) · Ken Liu (Carnegie Mellon University)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot C1

Federated Reconstruction: Partially Local Federated Learning
Karan Singhal (Google Research) · Hakim Sidahmed (Carnegie Mellon University) · Zachary Garrett (Google) · Shanshan Wu (University of Texas at Austin) · Keith Rush (Google) · Sushant Prakash (Google)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot C0

Habitat 2.0: Training Home Assistants to Rearrange their Habitat
Andrew Szot (Georgia Institute of Technology) · Alexander Clegg (Facebook (FAIR Labs)) · Eric Undersander (Facebook) · Erik Wijmans (Georgia Institute of Technology) · Yili Zhao (Facebook AI Research) · John Turner (Facebook) · Noah Maestre (Facebook) · Mustafa Mukadam (Facebook AI Research) · Devendra Singh Chaplot (Carnegie Mellon University) · Oleksandr Maksymets (Facebook AI Research) · Aaron Gokaslan (Facebook) · Vladimír Vondruš (Magnum Engine) · Sameer Dharur (Georgia Tech) · Franziska Meier (Facebook AI Research) · Wojciech Galuba (Facebook AI Research) · Angel Chang (Simon Fraser University) · Zsolt Kira (Georgia Institute of Technology) · Vladlen Koltun (Apple) · Jitendra Malik (UC Berkeley) · Manolis Savva (Simon Fraser University) · Dhruv Batra
Thu Dec 09 12:30 AM — 02:00 AM (PST) @ Poster Session 5 Spot E0

Joint inference and input optimization in equilibrium networks
Swaminathan Gurumurthy (Carnegie Mellon University) · Shaojie Bai (Carnegie Mellon University) · Zachary Manchester (Carnegie Mellon University) · J. Zico Kolter (Carnegie Mellon University / Bosch Center for AI)
Thu Dec 09 04:30 PM — 06:00 PM (PST) @ Poster Session 7 Spot E1

On Training Implicit Models
Zhengyang Geng (Peking University) · Xin-Yu Zhang (TuSimple) · Shaojie Bai (Carnegie Mellon University) · Yisen Wang (Peking University) · Zhouchen Lin (Peking University)
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Poster Session 8 Spot G2

Robust Online Correlation Clustering
Silvio Lattanzi (Google Research) · Benjamin Moseley (Carnegie Mellon University) · Sergei Vassilvitskii (Google) · Yuyan Wang (Carnegie Mellon University) · Rudy Zhou (Carnegie Mellon University)
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Poster Session 8 Spot A1

Causality & Graphical Models

Can Information Flows Suggest Targets for Interventions in Neural Circuits?
Praveen Venkatesh (Allen Institute) · Sanghamitra Dutta (Carnegie Mellon University) · Neil Mehta (Carnegie Mellon University) · Pulkit Grover (CMU)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot A1

Identification of Partially Observed Linear Causal Models: Graphical Conditions for the Non-Gaussian and Heterogeneous Cases
Jeff Adams (University of Copenhagen) · Niels Hansen (University of Copenhagen) · Kun Zhang (CMU)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot C0

Instance-dependent Label-noise Learning under a Structural Causal Model
Nick Yao (University of Sydney) · Tongliang Liu (The University of Sydney) · Mingming Gong (University of Melbourne) · Bo Han (HKBU / RIKEN) · Gang Niu (RIKEN) · Kun Zhang (CMU)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot G3

Learning Treatment Effects in Panels with General Intervention Patterns
Vivek Farias (Massachusetts Institute of Technology) · Andrew Li (Carnegie Mellon University) · Tianyi Peng (Massachusetts Institute of Technology)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot F3

Reliable Causal Discovery with Improved Exact Search and Weaker Assumptions
Ignavier Ng (Carnegie Mellon University) · Yujia Zheng (Carnegie Mellon University) · Jiji Zhang (Lingnan University) · Kun Zhang (CMU)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot C1

Multi-task Learning of Order-Consistent Causal Graphs
Xinshi Chen (Georgia Institute of Technology) · Haoran Sun (Georgia Institute of Technology) · Caleb Ellington (Carnegie Mellon University) · Eric Xing (Petuum Inc. / Carnegie Mellon University) · Le Song (Georgia Institute of Technology)
Wed Dec 08 04:30 PM — 06:00 PM (PST) @ Poster Session 4 Spot C3

Efficient Online Estimation of Causal Effects by Deciding What to Observe
Shantanu Gupta (Carnegie Mellon University) · Zachary Lipton (Carnegie Mellon University) · David Childers (Carnegie Mellon University)
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Poster Session 8 Spot A0

Learning latent causal graphs via mixture oracles
Bohdan Kivva (University of Chicago) · Goutham Rajendran (University of Chicago) · Pradeep Ravikumar (Carnegie Mellon University) · Bryon Aragam (University of Chicago)
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Poster Session 8 Spot C0

Computational Linguistics

BARTScore: Evaluating Generated Text as Text Generation
Weizhe Yuan (Carnegie Mellon University) · Graham Neubig (Carnegie Mellon University) · Pengfei Liu (Carnegie Mellon University)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot A3

Off-Policy Risk Assessment in Contextual Bandits
Audrey Huang (Carnegie Mellon University) · Liu Leqi (Carnegie Mellon University) · Zachary Lipton (Carnegie Mellon University) · Kamyar Azizzadenesheli (Purdue University)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot C2

SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
Enze Xie (The University of Hong Kong) · Wenhai Wang (Nanjing University) · Zhiding Yu (Carnegie Mellon University) · Anima Anandkumar (NVIDIA / Caltech) · Jose M. Alvarez (NVIDIA) · Ping Luo (The University of Hong Kong)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot A3

Dynamic population-based meta-learning for multi-agent communication with natural language
Abhinav Gupta (Facebook AI Research/CMU) · Marc Lanctot (DeepMind) · Angeliki Lazaridou (DeepMind)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot B2

(Implicit)^2: Implicit Layers for Implicit Representations
Zhichun Huang (Carnegie Mellon University) · Shaojie Bai (Carnegie Mellon University) · J. Zico Kolter (Carnegie Mellon University / Bosch Center for AI)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot D1

Foundations of Symbolic Languages for Model Interpretability
Marcelo Arenas (Pontificia Universidad Catolica de Chile) · Daniel Báez (Universidad de Chile) · Pablo Barceló (PUC Chile & Millennium Institute for Foundational Research on Data) · Jorge Pérez (Universidad de Chile) · Bernardo Subercaseaux (Carnegie Mellon University)
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Poster Session 8 Spot C1

Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers
Mandela Patrick (University of Oxford) · Dylan Campbell (University of Oxford) · Yuki Asano (University of Amsterdam) · Ishan Misra (Facebook AI Research) · Florian Metze (Carnegie Mellon University) · Christoph Feichtenhofer (Facebook AI Research) · Andrea Vedaldi (University of Oxford / Facebook AI Research) · João Henriques (University of Oxford)
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Poster Session 8 Spot C3

Computer Vision

Dynamics-regulated kinematic policy for egocentric pose estimation
Zhengyi Luo (Carnegie Mellon University) · Ryo Hachiuma (Keio University) · Ye Yuan (Carnegie Mellon University) · Kris Kitani (Carnegie Mellon University)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot D3

NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild
Jason Zhang (Carnegie Mellon University) · Gengshan Yang (Carnegie Mellon University) · Shubham Tulsiani (UC Berkeley) · Deva Ramanan (Carnegie Mellon University)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Virtual Poster Session 1 Spot D0

SEAL: Self-supervised Embodied Active Learning using Exploration and 3D Consistency
Devendra Singh Chaplot (Carnegie Mellon University) · Murtaza Dalal (Carnegie Mellon University) · Saurabh Gupta (UIUC) · Jitendra Malik (UC Berkeley) · Russ Salakhutdinov (Carnegie Mellon University)
Wed Dec 08 04:30 PM — 06:00 PM (PST) @ Poster Session 4 Spot A1

Interesting Object, Curious Agent: Learning Task-Agnostic Exploration
Simone Parisi (Facebook) · Victoria Dean (CMU) · Deepak Pathak (Carnegie Mellon University) · Abhinav Gupta (Facebook AI Research/CMU)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot I3

TöRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis
Benjamin Attal (Carnegie Mellon University) · Eliot Laidlaw (Brown University) · Aaron Gokaslan (Facebook) · Changil Kim (Facebook) · Christian Richardt (University of Bath) · James Tompkin (Brown University) · Matthew O’Toole (Carnegie Mellon University)
Thu Dec 09 04:30 PM — 06:00 PM (PST) @ Poster Session 7 Spot E3

ViSER: Video-Specific Surface Embeddings for Articulated 3D Shape Reconstruction
Gengshan Yang (Carnegie Mellon University) · Deqing Sun (Google) · Varun Jampani (Google) · Daniel Vlasic (Massachusetts Institute of Technology) · Forrester Cole (Google Research) · Ce Liu (Microsoft) · Deva Ramanan (Carnegie Mellon University)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot A2

Deep Learning

Can fMRI reveal the representation of syntactic structure in the brain?
Aniketh Janardhan Reddy (University of California Berkeley) · Leila Wehbe (Carnegie Mellon University)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot A3

PLUR: A Unifying, Graph-Based View of Program Learning, Understanding, and Repair
Zimin Chen (KTH Royal Institute of Technology, Stockholm, Sweden) · Vincent J Hellendoorn (CMU) · Pascal Lamblin (Google Research – Brain Team) · Petros Maniatis (Google Brain) · Pierre-Antoine Manzagol (Google) · Daniel Tarlow (Microsoft Research Cambridge) · Subhodeep Moitra (Google, Inc.)
Tue Dec 07 04:30 PM — 06:00 PM (PST) @ Poster Session 2 Spot A3

Rethinking Neural Operations for Diverse Tasks
Nick Roberts (University of Wisconsin-Madison) · Misha Khodak (CMU) · Tri Dao (Stanford University) · Liam Li (Carnegie Mellon University) · Chris Ré (Stanford) · Ameet S Talwalkar (CMU)
Tue Dec 07 04:30 PM — 06:00 PM (PST) @ Poster Session 2 Spot H1

Neural Additive Models: Interpretable Machine Learning with Neural Nets
Rishabh Agarwal (Google Research, Brain Team) · Levi Melnick (Microsoft) · Nicholas Frosst (Google) · Xuezhou Zhang (UW-Madison) · Ben Lengerich (Carnegie Mellon University) · Rich Caruana (Microsoft) · Geoffrey Hinton (Google)
Wed Dec 08 12:30 AM — 02:00 AM (PST) @ Poster Session 3 Spot A2

Emergent Discrete Communication in Semantic Spaces
Mycal Tucker (Massachusetts Institute of Technology) · Huao Li (University of Pittsburgh) · Siddharth Agrawal (Carnegie Mellon University) · Dana Hughes (Carnegie Mellon University) · Katia Sycara (CMU) · Michael Lewis (University of Pittsburgh) · Julie A Shah (MIT)
Wed Dec 08 04:30 PM — 06:00 PM (PST) @ Poster Session 4 Spot A1

Why Spectral Normalization Stabilizes GANs: Analysis and Improvements
Zinan Lin (Carnegie Mellon University) · Vyas Sekar (Carnegie Mellon University) · Giulia Fanti (CMU)
Wed Dec 08 04:30 PM — 06:00 PM (PST) @ Poster Session 4 Spot C3

Can multi-label classification networks know what they don’t know?
Haoran Wang (Carnegie Mellon University) · Weitang Liu (UC San Diego) · Alex Bocchieri (University of Wisconsin, Madison) · Sharon Li (University of Wisconsin-Madison)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot D3

Influence Patterns for Explaining Information Flow in BERT
Kaiji Lu (Carnegie Mellon University) · Zifan Wang (Carnegie Mellon University) · Peter Mardziel (Carnegie Mellon University) · Anupam Datta (Carnegie Mellon University)
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Poster Session 8 Spot C0

Local Signal Adaptivity: Provable Feature Learning in Neural Networks Beyond Kernels
Stefani Karp (Carnegie Mellon University/Google) · Ezra Winston (Carnegie Mellon University) · Yuanzhi Li (CMU) · Aarti Singh (CMU)
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Poster Session 8 Spot G3

Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification
Shiqi Wang (Columbia) · Huan Zhang (UCLA) · Kaidi Xu (Northeastern University) · Xue Lin (Northeastern University) · Suman Jana (Columbia University) · Cho-Jui Hsieh (UCLA) · J. Zico Kolter (Carnegie Mellon University / Bosch Center for AI)
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Poster Session 8 Spot F1

Luna: Linear Unified Nested Attention
Max Ma (University of Southern California) · Xiang Kong (Carnegie Mellon University) · Sinong Wang (Facebook AI) · Chunting Zhou (Language Technologies Institute, Carnegie Mellon University) · Jonathan May (University of Southern California) · Hao Ma (Facebook AI) · Luke Zettlemoyer (University of Washington and Facebook)
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Poster Session 8 Spot A3

Parametric Complexity Bounds for Approximating PDEs with Neural Networks
Tanya Marwah (Carnegie Mellon University) · Zachary Lipton (Carnegie Mellon University) · Andrej Risteski (CMU)
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Poster Session 8 Spot A1

Estimation & Inference

Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation
Divyansh Jhunjhunwala (Carnegie Mellon University) · Ankur Mallick (Carnegie Mellon University) · Advait Gadhikar (Carnegie Mellon University) · Swanand Kadhe (University of California Berkeley) · Gauri Joshi (Carnegie Mellon University)
Wed Dec 08 04:30 PM — 06:00 PM (PST) @ Poster Session 4 Spot G2

Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification
Youngseog Chung (Carnegie Mellon University) · Willie Neiswanger (Carnegie Mellon University) · Ian Char (Carnegie Mellon University) · Jeff Schneider (CMU)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot G0

Lattice partition recovery with dyadic CART
Oscar Hernan Madrid Padilla (University of California, Los Angeles) · Yi Yu (The University of Warwick) · Alessandro Rinaldo (CMU)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Virtual Poster Session 6 Spot D2

Mixture Proportion Estimation and PU Learning: A Modern Approach
Saurabh Garg (CMU) · Yifan Wu (Carnegie Mellon University) · Alexander J Smola (NICTA) · Sivaraman Balakrishnan (Carnegie Mellon University) · Zachary Lipton (Carnegie Mellon University)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot B2

Fairness & Interpretability

Fair Sortition Made Transparent
Bailey Flanigan (Carnegie Mellon University) · Greg Kehne (Harvard University) · Ariel Procaccia (Carnegie Mellon University)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot B1

Learning Theory

A unified framework for bandit multiple testing
Ziyu Xu (Carnegie Mellon University) · Ruodu Wang (University of Waterloo) · Aaditya Ramdas (CMU)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot C1

Learning-to-learn non-convex piecewise-Lipschitz functions
Maria-Florina Balcan (Carnegie Mellon University) · Misha Khodak (CMU) · Dravyansh Sharma (CMU) · Ameet S Talwalkar (CMU)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot E1

Rebounding Bandits for Modeling Satiation Effects
Liu Leqi (Carnegie Mellon University) · Fatma Kilinc Karzan (Carnegie Mellon University) · Zachary Lipton (Carnegie Mellon University) · Alan Montgomery (Carnegie Mellon University)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot A3

Sharp Impossibility Results for Hyper-graph Testing
Jiashun Jin (CMU Statistics) · Tracy Ke (Harvard University) · Jiajun Liang (Purdue University)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot A2

Dimensionality Reduction for Wasserstein Barycenter
Zach Izzo (Stanford University) · Sandeep Silwal (Massachusetts Institute of Technology) · Samson Zhou (Carnegie Mellon University)
Tue Dec 07 04:30 PM — 06:00 PM (PST) @ Poster Session 2 Spot B3

Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond
Maria-Florina Balcan (Carnegie Mellon University) · Siddharth Prasad (Computer Science Department, Carnegie Mellon University) · Tuomas Sandholm (CMU, Strategic Machine, Strategy Robot, Optimized Markets) · Ellen Vitercik (University of California, Berkeley)
Tue Dec 07 04:30 PM — 06:00 PM (PST) @ Poster Session 2 Spot E1

Faster Matchings via Learned Duals
Michael Dinitz (Johns Hopkins University) · Sungjin Im (University of California, Merced) · Thomas Lavastida (Carnegie Mellon University) · Benjamin Moseley (Carnegie Mellon University) · Sergei Vassilvitskii (Google)
Tue Dec 07 04:30 PM — 06:00 PM (PST) @ Poster Session 2 Spot A1

Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free Reinforcement Learning
Gen Li (Tsinghua University) · Laixi Shi (Carnegie Mellon University) · Yuxin Chen (Caltech) · Yuantao Gu (Tsinghua University) · Yuejie Chi (Carnegie Mellon University)
Wed Dec 08 04:30 PM — 06:00 PM (PST) @ Poster Session 4 Spot H3

Universal Approximation Using Well-Conditioned Normalizing Flows
Holden Lee (Duke University) · Chirag Pabbaraju (Carnegie Mellon University) · Anish Sevekari (Carnegie Mellon University) · Andrej Risteski (CMU)
Wed Dec 08 04:30 PM — 06:00 PM (PST) @ Poster Session 4 Spot B1

Boosted CVaR Classification
Runtian Zhai (Carnegie Mellon University) · Chen Dan (Carnegie Mellon University) · Arun Suggala (Carnegie Mellon University) · J. Zico Kolter (Carnegie Mellon University / Bosch Center for AI) · Pradeep Ravikumar (Carnegie Mellon University)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Virtual Poster Session 6 Spot D0

Data driven semi-supervised learning
Maria-Florina Balcan (Carnegie Mellon University) · Dravyansh Sharma (CMU)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot C3

Multi-Agent Systems

Stateful Strategic Regression
Keegan Harris (Carnegie Mellon University) · Hoda Heidari (Carnegie Mellon University) · Steven Wu (Carnegie Mellon University)
Tue Dec 07 04:30 PM — 06:00 PM (PST) @ Poster Session 2 Spot G0

Revenue maximization via machine learning with noisy data
Ellen Vitercik (University of California, Berkeley) · Tom Yan (Carnegie Mellon University)
Wed Dec 08 04:30 PM — 06:00 PM (PST) @ Poster Session 4 Spot C0

Equilibrium Refinement for the Age of Machines: The One-Sided Quasi-Perfect Equilibrium
Gabriele Farina (Carnegie Mellon University) · Tuomas Sandholm (CMU, Strategic Machine, Strategy Robot, Optimized Markets)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot A2

Subgame solving without common knowledge
Brian Zhang (Carnegie Mellon University) · Tuomas Sandholm (CMU, Strategic Machine, Strategy Robot, Optimized Markets)
Thu Dec 09 04:30 PM — 06:00 PM (PST) @ Poster Session 7 Spot B2

Fast Policy Extragradient Methods for Competitive Games with Entropy Regularization
Shicong Cen (Carnegie Mellon University) · Yuting Wei (Carnegie Mellon University) · Yuejie Chi (Carnegie Mellon University)
Thu Dec 09 04:30 PM — 06:00 PM (PST) @ Poster Session 7 Spot B1

Privacy & Robustness

Adversarially robust learning for security-constrained optimal power flow
Priya Donti (Carnegie Mellon University) · Aayushya Agarwal (Carnegie Mellon University) · Neeraj Vijay Bedmutha (Carnegie Mellon University) · Larry Pileggi (Carnegie Mellon University) · J. Zico Kolter (Carnegie Mellon University / Bosch Center for AI)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot B3

Robustness between the worst and average case
Leslie Rice (Carnegie Mellon University) · Anna Bair (Carnegie Mellon University) · Huan Zhang (UCLA) · J. Zico Kolter (Carnegie Mellon University / Bosch Center for AI)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot E1

Relaxing Local Robustness
Klas Leino (School of Computer Science, Carnegie Mellon University) · Matt Fredrikson (CMU)
Tue Dec 07 04:30 PM — 06:00 PM (PST) @ Poster Session 2 Spot A1

Iterative Methods for Private Synthetic Data: Unifying Framework and New Methods
Terrance Liu (Carnegie Mellon University) · Giuseppe Vietri (University of Minnesota) · Steven Wu (Carnegie Mellon University)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot G1

Reinforcement Learning

Monte Carlo Tree Search With Iteratively Refining State Abstractions
Samuel Sokota (Carnegie Mellon University) · Caleb Y Ho (Independent Researcher) · Zaheen Ahmad (University of Alberta) · J. Zico Kolter (Carnegie Mellon University / Bosch Center for AI)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot B1

Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting
Gen Li (Tsinghua University) · Yuxin Chen (Caltech) · Yuejie Chi (Carnegie Mellon University) · Yuantao Gu (Tsinghua University) · Yuting Wei (Carnegie Mellon University)
Wed Dec 08 04:30 PM — 06:00 PM (PST) @ Poster Session 4 Spot H2

When Is Generalizable Reinforcement Learning Tractable?
Dhruv Malik (Carnegie Mellon University) · Yuanzhi Li (CMU) · Pradeep Ravikumar (Carnegie Mellon University)
Wed Dec 08 04:30 PM — 06:00 PM (PST) @ Poster Session 4 Spot C2

An Exponential Lower Bound for Linearly Realizable MDP with Constant Suboptimality Gap
Yuanhao Wang (Tsinghua University) · Ruosong Wang (Carnegie Mellon University) · Sham Kakade (Harvard University & Microsoft Research)
Thu Dec 09 04:30 PM — 06:00 PM (PST) @ Poster Session 7 Spot D3

Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification
Ben Eysenbach (Google AI Resident) · Sergey Levine (University of Washington) · Russ Salakhutdinov (Carnegie Mellon University)
Thu Dec 09 04:30 PM — 06:00 PM (PST) @ Poster Session 7 Spot E1

Robust Predictable Control
Ben Eysenbach (Google AI Resident) · Russ Salakhutdinov (Carnegie Mellon University) · Sergey Levine (University of Washington)
Thu Dec 09 04:30 PM — 06:00 PM (PST) @ Poster Session 7 Spot E0

Discovering and Achieving Goals via World Models
Russell Mendonca (Carnegie Mellon University) · Oleh Rybkin (University of Pennsylvania) · Kostas Daniilidis (University of Pennsylvania) · Danijar Hafner (Google) · Deepak Pathak (Carnegie Mellon University)
Thu Dec 09 04:30 PM — 06:00 PM (PST) @ Poster Session 7 Spot F0

Functional Regularization for Reinforcement Learning via Learned Fourier Features
Alexander Li (Carnegie Mellon University) · Deepak Pathak (Carnegie Mellon University)
Thu Dec 09 04:30 PM — 06:00 PM (PST) @ Poster Session 7 Spot I3

Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives
Murtaza Dalal (Carnegie Mellon University) · Deepak Pathak (Carnegie Mellon University) · Russ Salakhutdinov (Carnegie Mellon University)
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Poster Session 8 Spot E1

No RL, No Simulation: Learning to Navigate without Navigating
Meera Hahn (Georgia Institute of Technology) · Devendra Singh Chaplot (Carnegie Mellon University) · Shubham Tulsiani (UC Berkeley) · Mustafa Mukadam (Facebook AI Research) · James M Rehg (Georgia Institute of Technology) · Abhinav Gupta (Facebook AI Research/CMU)
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Poster Session 8 Spot E1

Transfer Learning

Domain Adaptation with Invariant Representation Learning: What Transformations to Learn?
Petar Stojanov (Carnegie Mellon University) · Zijian Li (Guangdong University of Technology) · Mingming Gong (University of Melbourne) · Ruichu Cai (Guangdong University of Technology) · Jaime Carbonell (Carnegie Mellon University) · Kun Zhang (CMU)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot A3

Learning Domain Invariant Representations in Goal-conditioned Block MDPs
Beining Han (Tsinghua University) · Chongyi Zheng (Carnegie Mellon University) · Harris Chan (University of Toronto, Vector Institute) · Keiran Paster (University of Toronto) · Michael Zhang (University of Toronto / Vector Institute) · Jimmy Ba (University of Toronto / Vector Institute)
Thu Dec 09 04:30 PM — 06:00 PM (PST) @ Poster Session 7 Spot H0

Property-Aware Relation Networks for Few-Shot Molecular Property Prediction
Yaqing Wang (Baidu Research) · Abulikemu Abuduweili (Carnegie Mellon University) · Quanming Yao (4paradigm)
Thu Dec 09 04:30 PM — 06:00 PM (PST) @ Poster Session 7 Spot A0

Two Sides of Meta-Learning Evaluation: In vs. Out of Distribution
Amrith Setlur (Carnegie Mellon University) · Oscar Li (Carnegie Mellon University) · Virginia Smith (Carnegie Mellon University)
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Poster Session 8 Spot C0

Weak Supervision

Automatic Unsupervised Outlier Model Selection
Yue Zhao (Carnegie Mellon University) · Ryan Rossi (Purdue University) · Leman Akoglu (CMU)
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Poster Session 1 Spot F3

End-to-End Weak Supervision
Salva Rühling Cachay (Technical University of Darmstadt) · Benedikt Boecking (Carnegie Mellon University) · Artur Dubrawski (Carnegie Mellon University)
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Poster Session 6 Spot D0

Robust Contrastive Learning Using Negative Samples with Diminished Semantics
Songwei Ge (University of Maryland, College Park) · Shlok Mishra (University of Maryland, College Park) · Chun-Liang Li (Google) · Haohan Wang (Carnegie Mellon University) · David Jacobs (University of Maryland, USA)
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Poster Session 8 Spot B0

Datasets and Benchmarks Track

CrowdSpeech and Vox DIY: Benchmark Dataset for Crowdsourced Audio Transcription
Nikita Pavlichenko · Ivan Stelmakh · Dmitry Ustalov
Tue Dec 07 08:30 AM — 10:00 AM (PST) @ Dataset and Benchmark Poster Session 1 Spot C1

MultiBench: Multiscale Benchmarks for Multimodal Representation Learning
Paul Pu Liang · Yiwei Lyu · Xiang Fan · Zetian Wu · Yun Cheng · Jason Wu · Leslie (Yufan) Chen · Peter Wu · Michelle A. Lee · Yuke Zhu · Ruslan Salakhutdinov · Louis-Philippe Morency
Wed Dec 08 12:00 AM — 02:00 AM (PST) @ Dataset and Benchmark Poster Session 2 Spot D2

The CLEAR Benchmark: Continual LEArning on Real-World Imagery
Zhiqiu Lin · Jia Shi · Deepak Pathak · Deva Ramanan
Wed Dec 08 12:00 AM — 02:00 AM (PST) @ Dataset and Benchmark Poster Session 2 Spot D2

Argoverse 2.0: Next Generation Datasets for Self-driving Perception and Forecasting
Benjamin Wilson · William Qi · Tanmay Agarwal · John Lambert · Jagjeet Singh · Siddhesh Khandelwal · Bowen Pan · Ratnesh Kumar · Andrew Hartnett · Jhony Kaesemodel Pontes · Deva Ramanan · Peter Carr · James Hays
Wed Dec 08 12:00 AM — 02:00 AM (PST) @ Dataset and Benchmark Poster Session 2 Spot D3

RB2: Robotic Manipulation Benchmarking with a Twist
Sudeep Dasari · Jianren Wang · Joyce Hong · Shikhar Bahl · Yixin Lin · Austin Wang · Abitha Thankaraj · Karanbir Chahal · Berk Calli · Saurabh Gupta · David Held · Lerrel Pinto · Deepak Pathak · Vikash Kumar · Abhinav Gupta
Thu Dec 09 08:30 AM — 10:00 AM (PST) @ Dataset and Benchmark Poster Session 3 Spot D3

Therapeutics Data Commons: Machine Learning Datasets and Tasks for Drug Discovery and Development
Kexin Huang · Tianfan Fu · Wenhao Gao · Yue Zhao · Yusuf Roohani · Jure Leskovec · Connor Coley · Cao Xiao · Jimeng Sun · Marinka Zitnik
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Dataset and Benchmark Poster Session 4 Spot E1

Revisiting Time Series Outlier Detection: Definitions and Benchmarks
Henry Lai · Daochen Zha · Junjie Xu · Yue Zhao · Guanchu Wang · Xia Hu
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Dataset and Benchmark Poster Session 4 Spot E0

TenSet: A Large-scale Program Performance Dataset for Learned Tensor Compilers
Lianmin Zheng · Ruochen Liu · Junru Shao · Tianqi Chen · Joseph Gonzalez · Ion Stoica · Ameer Haj-Ali
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Dataset and Benchmark Poster Session 4 Spot E2

Neural Latents Benchmark ‘21: Evaluating latent variable models of neural population activity
Felix Pei · Joel Ye · David M Zoltowski · Anqi Wu · Raeed Chowdhury · Hansem Sohn · Joseph O’Doherty · Krishna V Shenoy · Matthew Kaufman · Mark Churchland · Mehrdad Jazayeri · Lee Miller · Jonathan Pillow · Il Memming Park · Eva Dyer · Chethan Pandarinath
Fri Dec 10 08:30 AM — 10:00 AM (PST) @ Dataset and Benchmark Poster Session 4 Spot C2

Organized Workshops

Artificial Intelligence for Humanitarian Assistance and Disaster Response Workshop
Ritwik Gupta · Esther Rolf · Robin Murphy · Eric Heim
Mon Dec 13 09:00 AM — 05:30 PM (PST)

CtrlGen: Controllable Generative Modeling in Language and Vision
Steven Y. Feng · Drew Arad Hudson · Tatsunori Hashimoto · Dongyeop Kang · Varun Prashant Gangal · Anusha Balakrishnan · Joel Tetreault
Mon Dec 13 08:00 AM — 12:00 PM (PST)

Machine Learning for Autonomous Driving
Xinshuo Weng · Jiachen Li · Nick Rhinehart · Daniel Omeiza · Ali Baheri · Rowan McAllister
Mon Dec 13 07:50 AM — 06:30 PM (PST)

Self-Supervised Learning – Theory and Practice
Pengtao Xie · Ishan Misra · Pulkit Agrawal · Abdelrahman Mohamed · Shentong Mo · Youwei Liang · Christin Jeannette Bohg · Kristina N Toutanova
Tue Dec 14 07:00 AM — 04:30 PM (PST)

Ecological Theory of RL: How Does Task Design Influence Agent Learning?
Manfred Díaz · Hiroki Furuta · Elise van der Pol · Lisa Lee · Shixiang (Shane) Gu · Pablo Samuel Castro · Simon Du · Marc Bellemare · Sergey Levine
Tue Dec 14 05:00 AM — 02:40 PM (PST)

Math AI for Education: Bridging the Gap Between Research and Smart Education
Pan Lu · Yuhuai Wu · Sean Welleck · Xiaodan Liang · Eric Xing · James McClelland
Tue Dec 14 08:55 AM — 06:05 PM (PST)

Organized Tutorials

ML for Physics and Physics for ML
Shirley Ho · Miles Cranmer
Mon Dec 06 09:00 AM — 01:00 PM (PST)

Invited Talks

Main conference: How Duolingo Uses AI to Assess, Engage and Teach Better
Luis von Ahn
Tue Dec 07 07:00 AM — 08:30 AM (PST)

I (Still) Can’t Believe It’s Not Better: A workshop for “beautiful” ideas that “should” have worked
Cosma Shalizi
Mon Dec 13 04:50 AM — 02:40 PM (PST)

CtrlGen: Controllable Generative Modeling in Language and Vision
Yulia Tsvetkov
Mon Dec 13 08:00 AM — 12:00 PM (PST)

Machine Learning for Autonomous Driving
Jeff Schneider
Mon Dec 13 07:50 AM — 06:30 PM (PST)

New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership
Virginia Smith
Mon Dec 13 05:30 AM — 04:00 PM (PST)

Self-Supervised Learning – Theory and Practice
Louis-Philippe Morency
Tue Dec 14 07:00 AM — 04:30 PM (PST)

Learning and Decision-Making with Strategic Feedback
Steven Wu
Tue Dec 14 07:00 AM — 02:30 PM (PST)

Read More