How Facebook partners with academia to help drive innovation in energy-efficient technology

Facebook is committed to sustainability and the fight against climate change. That’s why in September 2020 we announced our commitment to reaching net-zero emissions across our value chain by 2030. Part of Facebook’s sustainability effort involves data center efficiency — from building servers that require less energy to run to developing a liquid cooling system that uses less water.

To learn more about these data center sustainability efforts at Facebook and how we’re engaging with the academic community in this space, we sat down with Dharmesh Jani (“DJ”), the Open Ecosystem Lead on the hardware engineering team and the Open Compute IC Chair, and Dr. Katharine Schmidtke, Director of Sourcing at Facebook for application-specific integrated circuits and custom silicon. DJ and Schmidtke’s teams are working to achieve four goals:

  1. Extend the life cycle of Facebook data center equipment and make our gear reusable by others.
  2. Improve energy efficiency in Facebook infrastructure via hardware and software innovations.
  3. Reduce carbon-heavy content in our data centers.
  4. Work with industry and academia to drive innovation on sustainability across our value chain.

DJ and Schmidtke discuss why it’s important to build data centers that are as energy efficient as possible, how we’re working with and supporting academia and industry partners in this space, and potential research challenges that Facebook researchers and engineers could tackle next.

Building energy-efficient data centers

For over a decade, Facebook has been committed to sustainability and energy efficiency. In 2009, Facebook built its first company-owned data center in Prineville, Oregon, one of the world’s most energy-efficient data centers with a power usage effectiveness (PUE) ratio between 1.06 and 1.08. In 2011, Facebook shared its designs with the public and — along with other industry experts — launched the Open Compute Project (OCP), a rapidly growing global community whose mission is to design, use, and enable mainstream delivery of the most efficient designs for scalable computing. However, there’s more to be done.

“On average, data centers use 205 TWh of electricity per year, which is the equivalent of 145 million metric tons of CO2 emissions,” explains DJ. “With the growth of hyperscale data centers in the coming years, these emissions are going to increase dramatically if mitigation is not considered today (source 1, source 2). Facebook wants to help address these growing emissions as well, to ensure we run efficient operations and achieve our goal of net-zero carbon by 2030.”

According to DJ, Facebook is doing multiple things to address these problems: “The sustainability team within Facebook is working across organizations to align on goals that lead to a reduction in carbon. Circularity is one of the emerging efforts within infrastructure to extend equipment life cycles, which has the biggest impact on the net-zero-carbon effort. We’re driving sustainability and circularity efforts in the industry through the Open Compute Project,” he says.

Data center construction itself also contributes to carbon emissions. Using already-built data centers as efficiently as possible is key to reducing the demand for new construction. Over the years, Facebook has developed a suite of industry-leading technologies to control and manage the peak power demand of its data centers. As a result, many more servers can be hosted in existing data centers with limited power capacity, reducing the demand for new data center construction by more than 50%. The technology is developed in-house with the help of academic collaborations and research internship programs, and key research findings and hyperscale operational experience are shared back with the community through publications at top academic conferences. Here are some examples: Dynamo: Facebook’s Data Center-Wide Power Management System, Coordinated Priority-aware Charging of Distributed Batteries in Oversubscribed Data Centers.
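The papers above describe the production systems in detail. As a rough, hypothetical sketch of the core idea behind priority-aware power management — shed power from the least critical machines first when a site nears its power budget — consider the following; the server names, priorities, wattages, and budget are invented for illustration and are not Facebook’s actual policies.

```python
# Hypothetical illustration of priority-aware power shedding. Dynamo's real
# policies are far more sophisticated (see the paper linked above).
servers = [
    {"id": "web-01",   "priority": 0, "power_w": 380},  # 0 = most critical
    {"id": "cache-01", "priority": 1, "power_w": 350},
    {"id": "batch-01", "priority": 2, "power_w": 420},
    {"id": "batch-02", "priority": 2, "power_w": 410},
]

def shed_power(servers, budget_w, min_power_w=200):
    """Cap the lowest-priority servers first until total draw fits the budget."""
    total = sum(s["power_w"] for s in servers)
    caps = {}
    for s in sorted(servers, key=lambda s: -s["priority"]):  # least critical first
        if total <= budget_w:
            break
        cut = min(s["power_w"] - min_power_w, total - budget_w)
        caps[s["id"]] = s["power_w"] - cut
        total -= cut
    return caps, total

caps, total = shed_power(servers, budget_w=1400)
print(caps, total)  # {'batch-01': 260} 1400
```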

Learn more about Facebook data center efficiency on the Tech@ blog, and read our latest Sustainability Report on Newsroom.

Partnerships and collaborations

Developing energy-efficient technology isn’t something that industry can do alone, which is why we often partner with experts in academia and support their pioneering work. “Facebook has launched a number of research collaborations directed at power reduction and energy efficiency over the past few years,” Schmidtke says. “Recently, Facebook sponsored the Institute of Energy Efficiency at UC Santa Barbara with a gift of $1.5 million over three years. We hope our contribution will help foster research in data center energy efficiency.”

“Another example is the ongoing research collaboration with Professor Clint Schow at UCSB,” Schmidtke says. “The project is focused on increasing the efficiency of optical interconnect data transmission between servers in our data center network. The research has just entered its second phase and is targeting highly efficient coherent optical links for data transmission.”

Facebook is also an industry member of the Center for Energy-Smart Electronic Systems (in partnership with the University of Texas at Arlington) and the Future Renewable Electric Energy Delivery and Management Systems Engineering Research Center (at North Carolina State University).

In addition to fostering innovation within the academic community, Facebook is leveraging industry partners. According to DJ, “We’re looking to drive sustainability-related initiatives within the OCP community to align other industry players across the value chain. We plan to define sustainability as one of the OCP tenets so that all future contributions can focus on it.”

What’s next

DJ offers three sustainability challenges that researchers in the field could tackle next, all of which would involve industry collaborations with academia and other research organizations.

One research challenge is making computation more carbon neutral. The AI field’s computing demands have grown exponentially: Since 2012, the amount of compute used in the largest AI training runs has doubled every 3.4 months — a 300,000x increase in compute from AlexNet to AlphaGo Zero. “How can we make AI more efficient when the current approach of ever-increasing computation is not viable?” says DJ. “This is one of the biggest challenges in the field, so I’m eager to see more Green AI initiatives.”

Another challenge is scheduling workloads within data centers for times when grid carbon intensity is low. “We have to think about the amount of workload coming into the data centers and the complex interactions involved to optimize for such use cases,” explains DJ. “I hope to see novel algorithmic ways of reducing energy consumption, distributing workloads, and impacting carbon emissions.”
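As a toy sketch of what such carbon-aware scheduling could look like, the following defers each flexible batch job to the lowest-carbon window that still meets its deadline. The hourly carbon-intensity forecast and the jobs are invented for the example; a real scheduler would consume live grid forecasts and handle far richer constraints.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Hypothetical grid carbon-intensity forecast (gCO2/kWh) for each hour of a day.
FORECAST = [450, 430, 400, 380, 360, 340, 300, 250,
            210, 190, 180, 175, 170, 180, 200, 240,
            290, 340, 390, 420, 440, 450, 455, 460]

@dataclass
class BatchJob:
    name: str
    hours_needed: int
    deadline_hour: int  # job must finish by this hour of the day

def schedule(jobs: List[BatchJob]) -> Dict[str, Tuple[int, int]]:
    """Greedily place each deferrable job in its lowest-carbon feasible window."""
    plan = {}
    for job in jobs:
        starts = range(job.deadline_hour - job.hours_needed + 1)
        best = min(starts, key=lambda s: sum(FORECAST[s:s + job.hours_needed]))
        plan[job.name] = (best, best + job.hours_needed)
    return plan

print(schedule([BatchJob("ml-training", 4, 20), BatchJob("nightly-backup", 2, 24)]))
# {'ml-training': (10, 14), 'nightly-backup': (11, 13)}
```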

An additional potential area of focus is technology that utilizes chiplets. Chiplets can be thought of as reusable, mix-and-match building blocks that come together to form more complex chips — an approach that can yield more efficient systems with a smaller carbon footprint. “I’m looking forward to new computer architectures that are domain specific and driven by chiplets,” says DJ. “We have only explored the tip of the iceberg in terms of sustainability. There is much we can do together in this space to further the goal of a greener tomorrow.”

Facebook is committed to open science, and we value our partnerships with industry and academia. We are confident that together we can help drive technology and innovation forward in this space.


Q&A with Clemson University’s Bart Knijnenburg, research award recipient for improving ad experiences

In this monthly interview series, we turn the spotlight on members of the academic community and the important research they do — as partners, collaborators, consultants, or independent contributors.

For February, we nominated Bart Knijnenburg, assistant professor at Clemson University. Knijnenburg received a 2019 UX-sponsored research award in improving ad experiences, and the resulting research was nominated for Best Paper at the 54th Hawaii International Conference on System Sciences (HICSS). Knijnenburg has also been involved in the Facebook Fellowship Program as the adviser of two program alumni, Moses Namara and Daricia Wilkinson.

In this Q&A, Knijnenburg describes the work he does at Clemson, including his recently nominated research in improving ad experiences. He also tells us what inspired this research, what the results were, and where people can learn more.

Q: Tell us about your role at Clemson and the type of research you and your department specialize in.

Bart Knijnenburg: I am an assistant professor in the Human-Centered Computing division of the Clemson University School of Computing. Our division studies the human aspects of computing through user-centered design and user experiments, with faculty members who study virtual environments, online communities, adaptive user experiences, etc. My personal interest lies in helping people make better decisions online through adaptive consumer decision support. Within this broad area, I have specialized in usable recommender systems and privacy decision-making.

In the area of recommender systems, I focus on usable mechanisms for users of such systems to input their preferences, and novel means to display and explain the resulting recommendations to users. An important goal I have in this area is to build systems that don’t just show users items that reflect their preferences, but help users better understand what their preferences are to begin with — systems I call “recommender systems for self-actualization.”

In the area of privacy decision-making, I focus on systems that actively assist consumers in their privacy decision-making practices — a concept I have dubbed “user-tailored privacy.” These systems should help users translate their privacy preferences into settings, thereby reducing the users’ burden of control while at the same time respecting their inherent privacy preferences.

Q: What inspired you to pursue your recent research project in improving ad experiences?

BK: Despite recent efforts to improve the user experience around online ads, distrust and skepticism around the collection and use of personal data for advertising purposes are on the rise. There are a number of reasons for this distrust, including a lack of transparency and control. This lack of transparency and control not only generates mistrust, but also makes it more likely that the user models created by ad personalization algorithms reflect users’ immediate desires rather than their longer-term goals. The presented ads, in turn, tend to reflect these short-term likes, ignoring users’ ambitions and their better selves.

As someone who has worked extensively on transparency and control in both the field of recommender systems and the field of privacy, I am excited to apply this work to the area of ad experiences. In this project, my team therefore aims to design, build, and evaluate intuitive explanations of the ad recommendation process and interaction mechanisms that allow users to control this process. We will build these mechanisms in line with the nascent concepts of recommender systems for self-actualization and user-tailored privacy. The ultimate goal of this effort is to make advertisements more aligned with users’ long-term goals and ambitions.

Q: What were the results of this research?

BK: The work on this project is still very much ongoing. Our first step was to conduct a systematic literature review on ad explanations, covering existing research on how they are generated, presented, and perceived by users. Based on this review, we developed a classification scheme that categorizes the existing literature on ad explanations, offering insights into the reasoning behind the ad recommendation, the objective of the explanation, the content of the explanation, and how this content should be presented. This classification scheme offers a useful tool for researchers and practitioners to synthesize existing research on ad explanations and to identify paths for future research.

Our second step involves the development of a measurement instrument to evaluate ad experiences. The validation of this measurement instrument is still ongoing, but the end result will be a carefully constructed set of questionnaires that can be used to measure users’ reactions toward online ads, including aspects of targeting accuracy, accountability, transparency, control, reliability, persuasiveness, and creepiness.

A third step involves a fundamental redesign of the ad experience on social networks, reimagining the very concept of advertising as a means to an end that serves the longer-term goals of the user. We are still in the very early stages of this activity, but we aim to explore the paradigm of recommendations, insights, and/or personal goals as a vehicle for this transformation of the ad experience.

Q: How has this research been received so far?

BK: Our paper on the literature review and the classification scheme of ad explanations was accepted to HICSS and was nominated as the Best Paper in the Social Media and e-Business Transformation minitrack. We are working on an interactive version of the classification scheme that provides a convenient overview of and direct access to the most relevant research in the area of ad explanations.

We are also working with Facebook researchers to make sure that our ad experience measurement instrument optimally serves their goal of creating a user-friendly ad experience.

Q: Where can people learn more about your research?

BK: You can find a project page about this research at www.usabart.nl/FBads. We will keep this page updated when new results become available!


Improving attitudes about mask wearing via Facebook ad campaigns

Wearing masks is an important part of the COVID-19 response, but the adoption of mask-wearing varies by geography and demographics. We know from the literature that social norms and attitudes around mask-wearing are among the factors that determine whether people actually wear masks. To help meet this urgent need, we recently evaluated two campaigns leveraging social norms and attitudes to improve mask-wearing behavior. These campaigns were run on our ads platform and measured using Brand Lift.

The first campaign used interest-based targeting to promote posts by public figures using the #wearamask hashtag. Within two days of seeing the ad, people were asked via survey: “When you think of most people whose opinions you value, how much would they approve of you wearing a mask to help slow the spread of COVID-19?” Of those in the control group, 69.4 percent selected “A great deal” or “Quite a bit,” versus 77.4 percent of those in the test group (other responses were “Somewhat,” “A little,” and “Not at all”). Thus, this campaign produced an eight-point increase, at 99 percent confidence, in the share of people reporting in-group approval for personal mask-wearing. That represents over 2 million people out of the 26 million who were reached during the campaign.

The second was the “You Will See Me” ad campaign developed by the Ad Council in partnership with the CDC and CDC Foundation. It was designed for Black Americans, given the disproportionate impact COVID-19 has had on the Black community. We asked the following via survey: “In the last 2 days, how often did you wear a mask in public to slow the spread of the coronavirus (COVID-19)?” Of those exposed to the campaign, 79.4 percent answered “Often” or “Always,” versus 75.5 percent in the control group (other responses were “Sometimes,” “Rarely,” and “Never”). Thus, this campaign produced a more than three-point increase, at 99 percent confidence, in those reporting wearing masks in public frequently. That represents over 200,000 people out of the six million who were reached during the campaign.
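The post does not detail how Brand Lift computes statistical confidence; purely as a hedged illustration, a lift between control and test proportions like those above can be checked with a standard two-proportion z-test. The survey sample sizes below are hypothetical.

```python
import math

def two_proportion_z(p1: float, n1: int, p2: float, n2: int):
    """Two-sided z-test for the difference between two independent proportions."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# First campaign: 69.4% control vs. 77.4% test; sample sizes are hypothetical.
z, p = two_proportion_z(0.694, 5000, 0.774, 5000)
print(f"lift = {0.774 - 0.694:.3f}, z = {z:.1f}, p = {p:.2g}")
```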

The results demonstrate that interventions like these can have significant impact, and we’re now working with public health partners to scale similar projects as part of our COVID-19 response. For more information about what Facebook is doing to keep people safe and informed about the coronavirus, read the latest updates on Newsroom.


Accelerating rural connectivity research: How Facebook helps bring connectivity to hard-to-reach areas

All deployment site photos from Peru were taken by our partners at Mayu Telecomunicaciones and are used here with permission. To request permission to use the photos, contact servicios@mayutel.com.

Facebook Connectivity’s mission is to enable better, broader global connectivity to bring more people online to a faster internet. We collaborate with others in the industry — including telecom operators, community leaders, technology developers, and researchers — in order to find solutions that are scalable and sustainable. One major research area of interest is rural connectivity, as many rural areas around the world still don’t have access to mobile connectivity and technology innovations are needed. An important element of rural connectivity is backhaul, the links that connect remote sites to the core network of the internet. Wireless backhaul using microwave radio provides low-cost, fast deployment in comparison with other options.

Today, the design of microwave backhaul relies on a clear line of sight (LOS) between endpoints. Unfortunately, in rural areas, a lack of LOS between settlements means that a repeater or reflector has to be built, which adds cost. In this project, we explore the use of diffraction, a physical phenomenon through which some wireless signal energy bends into the geometric shadow of an obstacle. If diffraction can be predicted reliably, it can be used to design and build wireless backhaul links in challenging environments, reducing the need to build repeaters and making network design more efficient.

Example physics-based signal propagation modeling result showing that some signal energy is diffracted into the shadow region

Illustration of how diffractive NLOS wireless links can reduce the need to build repeaters
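For intuition on how diffraction loss over an obstacle can be predicted, here is a minimal sketch of the classic single knife-edge model, using the ITU-R P.526 approximation. The project’s actual propagation models are more sophisticated; see the publications referenced at the end of this post.

```python
import math

def knife_edge_loss_db(h_m: float, d1_m: float, d2_m: float, freq_hz: float) -> float:
    """Single knife-edge diffraction loss (ITU-R P.526 approximation).

    h_m: obstacle height above the direct TX-RX line (negative if below it).
    d1_m, d2_m: distances from each terminal to the obstacle.
    """
    lam = 3e8 / freq_hz
    # Fresnel-Kirchhoff diffraction parameter.
    v = h_m * math.sqrt(2 * (d1_m + d2_m) / (lam * d1_m * d2_m))
    if v <= -0.78:
        return 0.0  # path is effectively unobstructed
    return 6.9 + 20 * math.log10(math.sqrt((v - 0.1) ** 2 + 1) + v - 0.1)

# Example: 10 km path at 6 GHz with a hilltop 10 m into the line of sight.
print(f"{knife_edge_loss_db(10, 5000, 5000, 6e9):.1f} dB")  # ~15.5 dB
```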

To address this challenge, Facebook Connectivity developed a research partnership with university and industry partners. We recognized that we needed field data that could be used to validate and calibrate signal prediction algorithms, improved network design methodologies, and an assessment of real-world cost-coverage impact. To facilitate knowledge sharing and collaboration, Facebook Connectivity organized a number of meetings, including a workshop in 2019. At this workshop, Omar Tupayachi Calderon (CEO of Mayu Telecomunicaciones, a rural mobile infrastructure operator in Peru) shared that “Peru has an incredible diversity of challenges, and 60,000 rural settlements still do not have broadband connectivity. We need your help.”

Bringing rural connectivity to Peru

Universidad Politécnica de Madrid (UPM), The Ohio State University (OSU), Air Electronics, and Plexus Controls developed instruments to measure signal propagation over difficult terrain and conducted systematic experiments in southern Ohio, in areas near Madrid, Spain, and in southern Ontario, Canada. University of Michigan, George Mason University, OSU, and MIT developed propagation models, resulting in a number of publications and open source software.

Experimental data and setups used by OSU and UPM

The complete solution set that Facebook developed includes an end-to-end workflow for link design, network planning, and site deployment, which we are sharing as a white paper in the Telecom Infra Project Network as a Service (NaaS) Solutions project group.

Rural Peru deployment pictures taken by Mayutel

Scaling the solution

To make the solution usable in as many parts of the world as possible, Facebook took several next steps:

First, we collaborated with OSU and George Mason University to make a MATLAB and Python version of the Irregular Terrain Model and Longley-Rice algorithm available as free, open source software.

Second, we broadened the collaboration to include Contract Telecommunications LTD — the makers of Pathloss, the most widely used microwave link planning software in the world — to implement the outputs of this project into their platform.

Third, we developed a field-grade, drone-mounted measurement kit with Plexus Controls to enable experimentalists to gather field data economically, and to let connectivity infrastructure developers validate signal strength in the field prior to building their sites. Further, we developed software for data visualization and basic processing. The drone and the software are designed to enable faster, simpler field experiments and validation than ever before.

Fourth, we are contributing our learnings to the Telecom Infra Project Network as a Service (NaaS) Solutions project group.

Finally, we have expanded our partnership through collaborations with TeleworX, Internet para Todos (IpT) de Peru, and Mayu Telecomunicaciones (Mayutel). IpT de Peru is a major network operator that is significantly expanding broadband access in rural parts of the country. Founded in 2019, IpT has deployed hundreds of broadband sites in rural areas of Peru to date and has successfully deployed dozens of NLOS links in its network, providing both endpoint and backbone transport connectivity. Mayutel works with local communities in rural Peru to build telecom sites, deploy 4G radio systems, and provide broadband connectivity to many of these communities for the first time.

Learn more

As we look forward to bringing this solution to other parts of the world, you can learn more about the technology behind this project through our publications and the resources below.

To learn about NLOS links in the Telecom Infra Project Network as a Service (NaaS) Solutions project group, please see our recently published white paper on the subject. For more about the Telecom Infra Project, visit their website. You can also learn about other initiatives on the Facebook Connectivity website.


Q&A with Ayesha Ali, two-time award winner of Facebook request for research proposals in misinformation

Facebook is a place where bright minds in computer science come to work on some of the world’s most complex and challenging research problems. In addition to recruiting top talent, we maintain close ties with academia and the research community to collaborate on difficult challenges and find solutions together. In this new monthly interview series, we turn the spotlight on members of the academic community and the important research they do — as partners, collaborators, consultants, or independent contributors.

This month, we reached out to Ayesha Ali, professor at Lahore University of Management Sciences (LUMS) in Pakistan. Ali is a two-time winner of the Facebook Foundational Integrity Research request for proposals (RFP) in misinformation and polarization (2019 and 2020). In this Q&A, Ali shares the results of her research, its impact, and advice for university faculty looking to follow a similar path.

Q: Tell us about your role at LUMS and the type of research you and your department specialize in.

Ayesha Ali: I joined the Department of Economics at LUMS in 2016 as an assistant professor, after completing my PhD in economics at the University of Toronto. I am trained as an applied development economist, and my research focuses on understanding and addressing policy challenges facing developing countries, such as increasing human development, managing energy and environment, and leveraging technology for societal benefit. Among the themes that I am working on is how individuals with low levels of digital literacy perceive and react to content on social media, and how that affects their beliefs and behavior.

Q: How did you decide to pursue research projects in misinformation?

AA: Before writing the first proposal back in 2018, I had been thinking about the phenomenon of misinformation and fabricated content for quite some time. On multiple occasions, I had the opportunity to interact with colleagues in the computer science department on this issue, and we had some great discussions about it.

We quickly realized that we cannot combat misinformation with technology alone. It is a multifaceted issue. To address this problem, we need the following: user education, technology for filtering false news, and context-specific policies for deterring false news generation and dissemination. We were particularly interested in thinking about the different ways we could educate people who have low levels of digital literacy to recognize misinformation.

Q: What were the results of your first research project, and what are your plans for the second one?

AA: In our first project, we used a randomized field experiment to study the effect of two types of user education programs on helping people recognize false news. Using a list of actual news stories circulated on social media, we created a test to measure the extent to which people are likely to believe misinformation. Contrary to their perceived effectiveness, we found no significant effect of video-based general educational messages about misinformation.

However, when video-based educational messages were augmented with personalized feedback based on individuals’ past engagement with false news, there was a significant improvement in their ability to recognize false news. Our results show that, when appropriately designed, educational programs can be effective in making people more discerning consumers of information on social media.

Our second project aims to build on this research agenda. We plan to focus on nontextual misinformation, such as audio deepfakes. Audio messages are a popular form of communication among people with low levels of literacy and digital literacy. Using surveys and experiments, we will examine how people perceive, consume, and engage with information received via audio deepfakes, and what role prior beliefs and analytical ability play in forming perceptions about the accuracy of such information. We also plan to design and experimentally evaluate an educational intervention to increase people’s ability to identify audio deepfakes.

Q: What is the impact of your research in your region and globally?

AA: I think there are at least three ways in which our work is having an impact:

  1. Our work raises awareness about the importance of digital literacy campaigns in combating misinformation. It shows that such interventions hold promise in making users more discerning consumers of information if they are tailored to the target population (e.g., low literacy populations).
  2. Our work can affect policy about media literacy campaigns and how to structure them, especially for low digital literacy populations. We are already in touch with various organizations in Pakistan to see how our findings can be put to use in various digital literacy campaigns. For example, COVID-19 vaccines are likely to be made available in the coming months, and there is a need to raise awareness about their importance and to proactively dispel any conspiracy theories and misinformation about them. Past experience with polio vaccination campaigns has shown that conspiracy theories can take strong root and even endanger human lives.
  3. We hope that this work will motivate others to work on such global societal challenges, especially in developing countries.

Q: What advice would you give to academics looking to get their research funded?

AA: I think there are three ingredients in a good research proposal:

  1. It tackles an important problem that ideally has contextual/local relevance.
  2. It demonstrates a well-motivated solution or a plan that has contextual/local relevance.
  3. It shows or at least makes the case for why you are uniquely placed to solve it well.

Q: Where can people learn more about your research?

AA: They can learn about my research on my webpage.


Sample-efficient exploration of trade-offs with parallel expected hypervolume improvement

What the research is:

q-Expected Hypervolume Improvement (qEHVI) is a new sample-efficient method for optimizing multiple competing, expensive-to-evaluate black-box functions. Traditional methods for multiobjective black-box optimization include evolutionary strategies, which are robust and can efficiently generate a large batch of candidate designs to evaluate on the true functions in parallel, but which require many evaluations to converge to the set of Pareto optimal trade-offs.

When the objectives are expensive to evaluate, however, sample efficiency is critical, and Bayesian optimization is commonly used to generate candidate designs. Typically, candidates are generated sequentially (e.g., using Expected Hypervolume Improvement), and candidate generation usually involves numerically optimizing an acquisition function that, in existing approaches, often does not provide gradients.

In this work, we propose a new acquisition function for multiobjective Bayesian optimization that 1) enables generating multiple candidates in parallel or asynchronously with proper uncertainty propagation over the pending candidate points, 2) generates candidates quickly using exact gradients, 3) yields state-of-the-art optimization performance, and 4) has desirable theoretical convergence guarantees.

qEHVI has several use cases across Facebook. For example, it is being used to tune parameters in Instagram’s recommendation systems, where it enables product teams to understand the optimal trade-offs between user engagement and CPU utilization, and it has identified policies that yielded simultaneous improvements in both objectives. qEHVI has also been used to optimize the reward functions for contextual bandit algorithms that determine video compression rates at upload time for Facebook and Instagram. This allows us to identify the set of optimal trade-offs between video upload quality and reliability, which has led to improved quality of service.

How it works:

In multiobjective optimization, there typically is no single best solution; rather, the goal is to identify the set of Pareto optimal solutions, such that improving one objective means deteriorating another. A natural measure of the quality of a Pareto frontier in the outcome space is the hypervolume that is dominated by the Pareto frontier and bounded from below by a reference point. Without loss of generality, we assume that the goal is to maximize all objectives. The utility of a new candidate is its hypervolume improvement: the volume in the outcome space that is exclusively dominated by the new point (and not by the preexisting Pareto frontier). The hypervolume improvement is typically a nonrectangular region, but it can be computed efficiently by partitioning the nondominated space into disjoint hyperrectangles.
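To make the hypervolume machinery concrete, here is a minimal two-objective sketch (maximization) that computes the hypervolume dominated by a set of points and the hypervolume improvement of one new outcome. qEHVI generalizes this quantity to an expectation over batches of candidates under the surrogate model.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def hypervolume_2d(points: List[Point], ref: Point) -> float:
    """Hypervolume dominated by a 2-D point set (maximization) above `ref`.

    Sweeps points in decreasing f1 order, summing the horizontal strip each
    nondominated point adds above the best f2 seen so far.
    """
    pts = sorted((p for p in points if p[0] > ref[0] and p[1] > ref[1]),
                 reverse=True)
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 > best_f2:  # nondominated by any point with larger f1
            hv += (f1 - ref[0]) * (f2 - best_f2)
            best_f2 = f2
    return hv

def hvi(new_point: Point, pareto: List[Point], ref: Point) -> float:
    """Hypervolume improvement contributed by a single new outcome."""
    return hypervolume_2d(pareto + [new_point], ref) - hypervolume_2d(pareto, ref)

pareto = [(1.0, 4.0), (3.0, 2.0)]
print(hypervolume_2d(pareto, (0.0, 0.0)))   # 8.0
print(hvi((2.0, 3.0), pareto, (0.0, 0.0)))  # 1.0
```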

To generate candidates in parallel, we compute the joint hypervolume improvement across multiple new points by using the inclusion-exclusion principle to compute the volume of the union of the overlapping hyperrectangles. Since we do not know the objective values for a new candidate point a priori, we integrate over our uncertainty around the unobserved objective values provided by our probabilistic surrogate model (typically a Gaussian process), and use the expected hypervolume improvement over the new candidate points as our acquisition function.
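Reusing the hypervolume_2d helper from the sketch above, a simple Monte Carlo estimate of this joint expected hypervolume improvement looks like the following. The toy posterior here samples each candidate’s outcomes independently; real qEHVI draws joint samples from the Gaussian process posterior so that correlations among pending candidates are propagated correctly.

```python
import random

def qehvi_estimate(sample_outcomes, pareto, ref, n_samples=1000):
    """Monte Carlo estimate of the joint (q-point) expected HV improvement."""
    base = hypervolume_2d(pareto, ref)
    total = 0.0
    for _ in range(n_samples):
        ys = sample_outcomes()  # one joint draw of outcomes for all q candidates
        total += hypervolume_2d(pareto + ys, ref) - base
    return total / n_samples

# Hypothetical stand-in posterior for q=2 candidates (independent Gaussians).
def sample_outcomes():
    return [(random.gauss(2.0, 0.3), random.gauss(3.0, 0.3)),
            (random.gauss(3.5, 0.3), random.gauss(1.5, 0.3))]

pareto = [(1.0, 4.0), (3.0, 2.0)]
print(qehvi_estimate(sample_outcomes, pareto, (0.0, 0.0)))
```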

Why it matters:

Generating and evaluating designs in parallel is important for fast end-to-end optimization time. For example, when tuning the hyperparameters of machine learning models, one can often evaluate many hyperparameter settings in parallel by distributing evaluations across a cluster of machines. In addition, because evaluation costs are high, generating high-quality candidates is critical. In many existing methods, the numerical optimization to find the maximizers of the acquisition function is very slow due to the lack of gradient information. Our acquisition function is differentiable, enabling gradient-based optimization and thus faster convergence and better candidates. Moreover, the computation is highly parallelizable: The acquisition function has constant time complexity given infinite cores, and it can be computed efficiently in many practical scenarios by exploiting GPU acceleration. We empirically show that our acquisition function achieves state-of-the-art optimization performance on a variety of benchmark problems.

In addition, we provide theoretical convergence guarantees on optimizing the acquisition function. Improving sample efficiency is important for speeding up current initiatives spanning ranking systems, AutoML, materials design, and robotics, and for opening the door to new optimization problems that require expensive and/or time-consuming evaluations of black-box functions.

Read the full paper:

Differentiable expected hypervolume improvement for parallel multi-objective Bayesian optimization

Check out our open source implementations:

qEHVI is available as part of Ax, our open source library for adaptive experimentation. The underlying algorithm is implemented in BoTorch, and researchers in the area of Bayesian optimization can find implementation details there.
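As a rough usage sketch, generating a parallel batch of candidates with qEHVI in BoTorch can look like the following. Import paths and constructor arguments have shifted across BoTorch releases, so treat this as illustrative (with toy data) and consult the BoTorch documentation for the current API.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.acquisition.multi_objective.monte_carlo import (
    qExpectedHypervolumeImprovement,
)
from botorch.utils.multi_objective.box_decompositions.non_dominated import (
    NondominatedPartitioning,
)
from botorch.optim import optimize_acqf

# Toy data: 8 observed designs in [0, 1]^2 with two objectives to maximize.
train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = torch.stack([train_X.sum(-1), 1.0 - train_X.prod(-1)], dim=-1)

# Fit a GP surrogate over both objectives.
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

ref_point = torch.zeros(2, dtype=torch.double)  # lower bound on both objectives
partitioning = NondominatedPartitioning(ref_point=ref_point, Y=train_Y)
acqf = qExpectedHypervolumeImprovement(
    model=model, ref_point=ref_point.tolist(), partitioning=partitioning
)

# Jointly optimize a batch of q=2 candidates using exact gradients.
candidates, _ = optimize_acqf(
    acqf,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=2,
    num_restarts=10,
    raw_samples=128,
)
print(candidates)
```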
