Building toward more autonomous and proactive cloud technologies with AI

Cloud Intelligence/AIOps blog series

In the first blog post in this series, Cloud Intelligence/AIOps – Infusing AI into Cloud Computing Systems, we presented a brief overview of Microsoft’s research on Cloud Intelligence/AIOps (AIOps), which develops innovative AI and machine learning (ML) technologies to help design, build, and operate complex cloud platforms and services effectively and efficiently at scale. As cloud computing platforms have continued to emerge as one of the most fundamental infrastructures of our world, both their scale and complexity have grown considerably. In our previous blog post, we discussed the three major pillars of AIOps research: AI for Systems, AI for Customers, and AI for DevOps, as well as the four major research areas that constitute the AIOps problem space: detection, diagnosis, prediction, and optimization. We also envisioned the AIOps research roadmap as building toward more autonomous, proactive, manageable, and comprehensive cloud platforms.

Vision of AIOps Research

  • Autonomous: Fully automate the operation of cloud systems to minimize system downtime and reduce manual effort.
  • Proactive: Predict the future status of the cloud, support proactive decision-making, and prevent problems before they occur.
  • Manageable: Introduce the notion of tiered autonomy for combining autonomous routine operations with deep human expertise.
  • Comprehensive: Span AIOps across the full cloud stack for global optimization and management, and extend it to multi-cloud environments.

Starting with this blog post, we will take a deeper dive into Microsoft’s vision for AIOps research and the ongoing efforts to realize that vision. This blog post will focus on how our researchers leveraged state-of-the-art AIOps research to help make cloud technologies more autonomous and proactive. We will discuss our work to make the cloud more manageable and comprehensive in future blog posts.

Autonomous cloud

Motivation

Cloud platforms require numerous actions and decisions every second to ensure that computing resources are properly managed and failures are promptly addressed. In practice, those actions and decisions are either generated by rule-based systems built on expert knowledge or made manually by experienced engineers. However, as cloud platforms continue to grow in both scale and complexity, it is apparent that such solutions will be insufficient for future cloud systems. On one hand, rigid rule-based systems, while encoding valuable domain knowledge, often involve huge numbers of rules and require frequent maintenance to stay accurate and adaptable. In practice, it is often unrealistic to keep such systems up to date as cloud systems expand in size and complexity, and even more difficult to guarantee consistency and avoid conflicts among the rules. On the other hand, manual engineering efforts are time-consuming, prone to error, and difficult to scale.

To overcome the coverage and scalability limitations of existing solutions and improve the adaptability and manageability of decision-making systems, cloud platforms must shift toward a more autonomous management paradigm. Instead of relying solely on expert knowledge, we need suitable AI/ML models that fuse operational data and expert knowledge to enable efficient, reliable, and autonomous management decisions. Still, it will take substantial research and engineering effort to overcome the many barriers to developing and deploying autonomous solutions on cloud platforms.

Toward an autonomous cloud

In the journey toward an autonomous cloud, there are two major challenges. The first lies in the heterogeneity of cloud data. In practice, cloud platforms deploy a huge number of monitors to collect data in various formats, including telemetry signals, machine-generated log files, and human input from engineers and users. The patterns and distributions of those data generally exhibit a high degree of diversity and are subject to change over time. To ensure that AIOps solutions can function autonomously in such an environment, it is essential to equip the management system with robust and extensible AI/ML models capable of learning useful information from heterogeneous data sources and drawing the right conclusions in various scenarios.

The complex interactions between different components and services present another major challenge in deploying autonomous solutions. While it can be easy to implement autonomous features for one or a few components or services, constructing end-to-end systems capable of automatically navigating the complex dependencies in cloud systems is the true challenge for both researchers and engineers. To address this challenge, it is important to leverage both domain knowledge and data to optimize the automation paths in application scenarios. Researchers and engineers should also implement reliable decision-making algorithms at every decision stage to improve the efficiency and stability of the whole end-to-end decision-making process.

Over the past few years, Microsoft research groups have developed many new models and methods for overcoming those challenges and improving the level of automation in various cloud application scenarios across the AIOps problem spaces. Notable examples include:

  • Detection: Gandalf and ATAD for the early detection of problematic deployments; HALO for hierarchical fault localization; and Onion for detecting incident-indicating logs.
  • Diagnosis: SPINE and UniParser for log parsing; Logic and Warden for regression and incident diagnosis; and CONAN for batch failure diagnosis.
  • Prediction: TTMPred for predicting time to mitigate incidents; LCS for predicting the low-capacity status in cloud servers; and Eviction Prediction for predicting the eviction of spot virtual machines.
  • Optimization: MLPS for optimizing the reallocation of containers; and RESIN for the management of memory leaks in cloud infrastructure.

These solutions not only improve service efficiency and reduce management time through more autonomous design, but also deliver higher performance and reliability with fewer human errors. As an illustration of our work toward a more autonomous cloud, we discuss our exploration of automatic safe deployment services below.

Exemplary scenario: Automatic safe deployment

In online services, the continuous integration and continuous deployment (CI/CD) of new patches and builds are critical for the timely delivery of bug fixes and feature updates. Because new deployments with undetected bugs or incompatibility issues can cause severe service outages and significant customer impact, cloud platforms enforce strict safe-deployment procedures before releasing each new deployment to the production environment. Such procedures typically involve multi-stage testing and verification in a sequence of canary environments with increasing scope. When a deployment-related anomaly is identified at one of these stages, the responsible deployment is rolled back for further diagnosis and fixing. Given the challenges of identifying deployment-related anomalies with heterogeneous patterns and of managing a huge number of deployments, safe-deployment systems administered manually can be extremely costly and error prone.
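
To make the stage-gated rollout concrete, the following minimal Python sketch shows the shape of such a safe-deployment loop. The stage names and helper functions (promote, detect_anomalies, is_deployment_related, rollback) are illustrative placeholders, not Azure components.

    CANARY_STAGES = ["single node", "single rack", "single cluster", "single region", "broad rollout"]

    def safe_deploy(build, promote, detect_anomalies, is_deployment_related, rollback):
        """Advance a build through increasingly broad canary stages, rolling back on bad signals."""
        for stage in CANARY_STAGES:
            promote(build, stage)                              # release to the next, larger scope
            anomalies = detect_anomalies(stage)                # monitor metrics, failure signals, logs
            blamed = [a for a in anomalies if is_deployment_related(a, build)]
            if blamed:
                rollback(build, stage)                         # stop the rollout for diagnosis and fixing
                return {"status": "rolled_back", "stage": stage, "evidence": blamed}
        return {"status": "fully_released"}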

To support automatic and reliable anomaly detection in safe deployment, we proposed a general methodology named ATAD for the effective detection of deployment-related anomalies in time-series signals. This method addresses the challenges of capturing changes with various patterns in time-series signals and the lack of labeled anomaly samples due to the heavy cost of labeling. Specifically, this method combines ideas from both transfer learning and active learning to make good use of the temporal information in the input signal and reduce the number of labeled samples required for model training. Our experiments have shown that ATAD can outperform other state-of-the-art anomaly detection approaches, even with only 1%-5% of labeled data.
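
The sketch below illustrates the active-learning half of this idea, assuming a scikit-learn-style classifier interface (predict_proba/fit) and a human labeler callback; it is a simplified illustration of the approach rather than the published ATAD implementation.

    import numpy as np

    def active_fine_tune(model, target_windows, ask_label, budget=20, rounds=5):
        """Spend a small labeling budget on the most uncertain time-series windows."""
        labeled_X, labeled_y = [], []
        per_round = max(budget // rounds, 1)
        for _ in range(rounds):
            scores = model.predict_proba(target_windows)[:, 1]       # anomaly probabilities
            uncertainty = -np.abs(scores - 0.5)                      # closest to 0.5 = least certain
            query_idx = np.argsort(uncertainty)[-per_round:]         # most ambiguous windows
            for i in query_idx:
                labeled_X.append(target_windows[i])
                labeled_y.append(ask_label(target_windows[i]))       # an engineer provides the label
            # Refine on the newly labeled windows; a real system would warm-start from the
            # detector transferred from labeled source signals (the transfer-learning step).
            model.fit(np.array(labeled_X), np.array(labeled_y))
        return model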

At the same time, we collaborated with product teams in Azure to develop and deploy Gandalf, an end-to-end automatic safe deployment system that reduces deployment time and increases the accuracy of detecting bad deployments in Azure. As a data-driven system, Gandalf monitors a large array of information, including performance metrics, failure signals, and deployment records, and detects anomalies with various patterns throughout the entire safe-deployment process. After detecting anomalies, Gandalf applies a vote-veto mechanism to reliably determine whether each detected anomaly is caused by a specific new deployment. Gandalf then automatically decides whether the new deployment should be stopped for a fix or whether it is safe to proceed to the next stage. Since rolling out in Azure, Gandalf has been effective at helping to capture bad deployments, achieving more than 90% precision and near 100% recall in production over a period of 18 months.
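
A toy version of such a vote-veto decision might look like the following; the fields and threshold are hypothetical and greatly simplified compared with Gandalf's production logic.

    def blame_deployment(anomalies, deployment, vote_threshold=3):
        """Decide whether detected anomalies should stop a new deployment."""
        votes, vetoes = 0, 0
        for a in anomalies:
            if a["correlated_deployment"] == deployment["id"] and a["on_deployed_nodes"]:
                votes += 1                       # evidence that the new build caused the anomaly
            if a["on_undeployed_nodes"]:
                vetoes += 1                      # same symptom without the build implicates the environment
        if vetoes > 0:
            return "proceed"                     # a veto overrides the votes
        return "stop_and_rollback" if votes >= vote_threshold else "proceed"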

Flow of Automatic Safe Deployment System

Proactive cloud

Motivation

Traditional decision-making in the cloud focuses on optimizing immediate resource usage and addressing emerging issues. While this reactive design is not unreasonable in a relatively static system, it can lead to short-sighted decisions in a dynamic environment. In cloud platforms, both the demand for and utilization of computing resources undergo constant change, including regular periodic patterns, unexpected spikes, and gradual shifts along both temporal and spatial dimensions. To improve the long-term efficiency and reliability of cloud platforms, it is critical to adopt a proactive design that takes the future status of the system into account in the decision-making process.

A proactive design leverages data-driven models to predict the future status of cloud platforms and enable downstream proactive decision-making. Conceptually, a typical proactive decision-making system consists of two modules: a prediction module and a decision-making module, as displayed in the following diagram.

Diagram of a proactive decision-making system for cloud platforms: a prediction module feeding a decision-making module

In the prediction module, historical data are collected and processed for training and fine-tuning the prediction model for deployment. The deployed prediction model takes in the online data stream and generates prediction results in real time. In the decision-making module, both the current system status and the predicted system status, along with other information such as domain knowledge and past decision history, are considered when making decisions that balance present and future benefits.
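
In code, the interaction between the two modules can be summarized by a small control loop like the one below; the class and callable names are illustrative rather than an actual AIOps API.

    class ProactiveController:
        def __init__(self, predictor, decide):
            self.predictor = predictor        # model trained offline on historical data, refreshed periodically
            self.decide = decide              # policy/optimizer that weighs present and predicted status
            self.history = []                 # past decisions, used as additional decision-making context

        def step(self, current_status, online_stream):
            predicted_status = self.predictor(online_stream)                  # real-time forecast
            action = self.decide(current_status, predicted_status, self.history)
            self.history.append((current_status, predicted_status, action))   # feedback for later tuning
            return action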

Toward proactive design

Proactive design, while creating new opportunities for improving the long-term efficiency and reliability of cloud systems, does expose the decision-making process to additional risks. On one hand, owing to the inherent randomness in the daily operation of cloud platforms, proactive decisions are always subject to uncertainty arising from stochastic elements in both the running systems and their environments. On the other hand, the reliability of prediction models adds another layer of risk to proactive decisions. Therefore, to guarantee the performance of proactive design, engineers must put mechanisms in place to address those risks.

To manage uncertainty risk, engineers need to reformulate the decision-making in proactive design to account for uncertainty. They can often use methodological frameworks, such as prediction+optimization and optimization under chance constraints, to incorporate uncertainty into the objective functions of optimization problems. Well-designed ML/AI models can also learn uncertainty from data to improve proactive decisions in the face of stochastic elements. As for risks associated with the prediction model, modules for improving data quality, including quality-aware feature engineering, robust data imputation, and data rebalancing, should be applied to reduce prediction errors. Engineers should also make continuous efforts to improve the robustness of prediction models and keep them up to date. Moreover, safeguarding mechanisms are essential to prevent decisions that may harm the cloud system.
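
As a small, concrete example of optimization under chance constraints, suppose a prediction model outputs samples of future demand and we must provision the least capacity that keeps the probability of running short below a tolerance epsilon. Under that assumed sampling interface, the chance constraint reduces to a quantile of the predictive distribution:

    import numpy as np

    def chance_constrained_capacity(demand_samples, epsilon=0.05):
        """Smallest capacity c with P(demand <= c) >= 1 - epsilon under the predictive samples."""
        return float(np.quantile(demand_samples, 1.0 - epsilon))

    # Illustrative usage with synthetic demand forecasts
    predicted_demand = np.random.lognormal(mean=3.0, sigma=0.4, size=10_000)
    print(chance_constrained_capacity(predicted_demand, epsilon=0.05))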

Microsoft’s AIOps research has pioneered the transition from reactive to proactive decision-making, especially in the problem spaces of prediction and optimization. Our efforts have not only led to significant improvements in many application scenarios traditionally supported by reactive decision-making, but also created many new opportunities. Notable proactive design solutions include Narya and Nenya for hardware failure mitigation, UAHS and CAHS for intelligent virtual machine provisioning, CUC for the predictive scheduling of workloads, and UCaC for bin packing optimization under chance constraints. In the discussion below, we use hardware failure mitigation as an example to illustrate how proactive design can be applied in cloud scenarios.

Exemplary scenario: Proactive hardware failure mitigation

A key threat to cloud platforms is hardware failure, which can interrupt hosted services and significantly impact the customer experience. Traditionally, hardware failures are resolved only reactively after the failure occurs, which typically involves temporary interruptions of hosted virtual machines and the repair or replacement of the impacted hardware. Such a solution provides limited help in reducing negative customer experiences.

Narya is a proactive disk-failure mitigation service capable of taking mitigation actions before failures occur. Specifically, Narya leverages ML models to predict potential disk failures and then makes decisions accordingly. To control the risks associated with uncertainty, Narya evaluates candidate mitigation actions based on their estimated impact on customers and chooses the action with the minimum impact. A feedback loop collects follow-up assessments to improve the prediction and decision modules.

Hardware failures in cloud systems are often highly interdependent. Therefore, to reduce the impact of prediction errors, Narya introduces a novel dependency-aware model that encodes the dependency relationships between nodes to improve the failure prediction model. Narya also implements an adaptive approach that uses A/B testing and bandit modeling to improve its estimates of the impact of each action. Several safeguarding mechanisms at different stages of Narya are also in place to prevent unsafe mitigation actions. Deploying Narya in Azure’s production environment has reduced the node hardware interruption rate for virtual machines by more than 26%.
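
The sketch below shows the general flavor of bandit-style action selection for mitigation; the action names, exploration rate, and impact bookkeeping are illustrative and much simpler than Narya's adaptive mechanism.

    import random

    class MitigationBandit:
        """Pick the mitigation action with the lowest estimated customer impact, with some exploration."""

        def __init__(self, actions=("live_migration", "soft_reboot", "mark_unallocatable"), explore_rate=0.1):
            self.actions = list(actions)
            self.explore_rate = explore_rate
            self.impact_sum = {a: 0.0 for a in self.actions}
            self.count = {a: 0 for a in self.actions}

        def choose(self):
            if random.random() < self.explore_rate:
                return random.choice(self.actions)          # A/B-style exploration of alternatives
            return min(self.actions, key=lambda a: self.impact_sum[a] / max(self.count[a], 1))

        def record(self, action, observed_impact):
            self.impact_sum[action] += observed_impact      # feedback loop: follow-up impact assessment
            self.count[action] += 1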

Narya's Feedback loop

Our recent work, Nenya, is another example of proactive failure mitigation. Under a reinforcement learning framework, Nenya fuses the prediction and decision-making modules into an end-to-end proactive decision-making system. It weighs both mitigation costs and failure rates to prioritize cost-effective mitigation actions under uncertainty. Moreover, traditional failure mitigation methods usually suffer from data imbalance: failure cases form only a very small portion of all cases, most of which represent healthy situations. Such imbalance introduces bias into both prediction and decision-making. To address this problem, Nenya adopts a cascading framework to ensure that mitigation decisions are not made at heavy cost, as sketched below. Experiments on database failures with Microsoft 365 data sets have shown that Nenya can reduce both mitigation costs and database failure rates compared with existing methods.
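
A cascading design of this kind can be pictured as a two-stage filter; the threshold and callables below are hypothetical stand-ins for Nenya's learned components.

    def cascade_decision(case, screen_prob, mitigation_policy, screen_threshold=0.01):
        """Stage 1 screens out the overwhelmingly common healthy cases; stage 2 handles the rare risky ones."""
        p_risky = screen_prob(case)            # cheap screening model
        if p_risky < screen_threshold:
            return "no_action"                 # treat as healthy, avoiding unnecessary mitigation cost
        return mitigation_policy(case)         # costlier policy weighs mitigation cost against failure risk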

Future work

As management systems become more automated and proactive, it is important to pay special attention to both the safety of cloud systems and the responsibility owed to cloud customers. Autonomous and proactive decision systems will depend heavily on advanced AI/ML models operating with little manual oversight. How to ensure that the decisions made by these approaches are both safe and responsible is an essential question that future work should answer.

An autonomous and proactive cloud relies on effective data usage and feedback loops across all stages of the management and operation of cloud platforms. On one hand, high-quality data on the status of cloud systems are needed to enable downstream autonomous and proactive decision-making. On the other hand, it is important to monitor and analyze the impact of each decision on the entire cloud platform in order to improve the management system. Such feedback loops can exist simultaneously for many related application scenarios. Therefore, to better support an autonomous and proactive cloud, a unified data plane responsible for data processing and feedback loops can take a central role in the overall system design and should be a key area of investment.

As such, the future of the cloud relies not only on adopting more autonomous and proactive solutions, but also on improving the manageability of cloud systems and infusing AIOps technologies comprehensively across the full cloud stack. In future blog posts, we will discuss how to work toward a more manageable and comprehensive cloud.

Stay tuned!

AI and the Future of Health

The emergence of increasingly capable large-scale AI models, such as the recently released GPT-4, is one of the most significant advances in computing in decades. These innovations are rapidly transforming every aspect of the value we get from technology, as demonstrated through Microsoft’s integration of GPT-4 into Bing, Edge, Microsoft 365, Power Platform, GitHub, and other offerings. More recently, Nuance has announced DAX Express, which uses a unique combination of conversational, ambient, and generative AI to automatically draft clinical notes after patient visits – helping to reduce care providers’ cognitive burdens and increase the joy of practicing medicine (whilst releasing time for care).

We are at an inflection point for the use of AI in healthcare – one of society’s most critical sectors. The significance of this moment is reflected in Peter Lee’s recent article in the New England Journal of Medicine on the potential future clinical applications of GPT-4. At Microsoft Research’s Health Futures organization, the multidisciplinary group dedicated to discovery in this space, we see this as the continuation of a journey, and a major milestone in the long process of innovating to help address the greatest challenges in healthcare.

In this blog, we will share some of our research team’s work to make healthcare more data-driven, predictive, and precise – ultimately, empowering every person on the planet to live a healthier future.

Enabling precision medicine and connected care

We are today at a unique moment in history where medicine, biology, and technology are converging on a large scale. This presents immense possibilities to revolutionize healthcare and the practice of medicine with the aid of trustworthy AI. While we embrace the potential of AI, we understand that the practice of medicine is an intricate balance of “art” and “science.” We recognize and honor the enduring physician-patient relationship, which is fundamental and timeless. Our diverse team comprises researchers, scientists, engineers, biotechnologists, designers, social scientists, strategists, healthcare experts, and medical professionals who collaborate globally and inclusively to reimagine and transform the lives of the patients and public we serve.

As we consider how technologies have shaped the practice of medicine over the centuries, from the individual to the ecosystem level, we are reminded that no technology exists in a vacuum. Our core understanding of biological systems is rapidly evolving, and with it, our understanding of what technologies are relevant and useful. Simultaneously, the use of technology across the health and life science industries, and the way healthcare is delivered, are also rapidly changing – reshaping our traditional healthcare delivery model from one of diagnosis and treatment, to one that prioritizes prevention and precise individualized care.

Recent advancements in machine learning and AI have fueled computational technologies that allow us to aggregate complex inputs from multiple data sources, with the potential to derive rich insights that rapidly expand our knowledge base and drive deeper discovery and faster innovation. At the same time, it remains an open question how to best use and regulate these technologies in real-world settings and at scale across healthcare and the life sciences. Nonetheless, we believe that we are on a path to delivering on the goal of precision medicine – a change in clinical practice which will be enabled by precision diagnostics, precision therapeutics, and connected care technologies.

To achieve this goal, we seek to collaborate with health and life sciences organizations with a similar appetite for transformation, complementary expertise, and a commitment to propel the change required. We are also engaged with the broader community in pursuing responsible and ethical use of AI in healthcare. Our diverse team has been successful in bridging the gap between the fields of medicine, biology and chemistry on one hand, and computing on the other. We act as “translators” between these fields, and through a process of ongoing collaboration and feedback, we have discovered new challenges and innovative solutions.

Below are some examples of our collaborative research approach:

Exploring diagnostic tools from new modalities

Multimodal foundation models for medicine: an example from radiology

The field of biomedicine involves a great deal of multimodal data, such as radiology images and text-based reports. Interpreting this data at scale is essential for improving care and accelerating research. Radiology reports often compare current and prior images to track changes in findings over time. This is crucial for decision making, but most AI models do not take this temporal structure into account. We are exploring a novel self-supervised framework that pre-trains vision-language models using pairs of reports and sequences of images. This includes handling missing or misaligned images and exploiting temporal information to learn more efficiently. Our approach, called BioViL-T, achieves state-of-the-art results on several downstream tasks, such as report generation, and interpreting disease progression by focusing on relevant image regions across time. BioViL-T is part of an ongoing collaboration with our colleagues at Nuance to develop scalable and flexible AI solutions for radiology that can empower care providers and augment existing workflows.
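
To give a sense of how paired reports and image sequences can drive self-supervision, here is a generic sketch of a temporal image-text contrastive training step in PyTorch. It illustrates the general idea only and is not the published BioViL-T objective or architecture; the encoder interfaces are assumptions.

    import torch
    import torch.nn.functional as F

    def contrastive_step(image_encoder, text_encoder, prior_images, current_images, report_tokens, temperature=0.07):
        """One symmetric InfoNCE step over (prior image, current image, report) triples."""
        img_emb = F.normalize(image_encoder(prior_images, current_images), dim=-1)  # fuses the temporal image pair
        txt_emb = F.normalize(text_encoder(report_tokens), dim=-1)
        logits = img_emb @ txt_emb.t() / temperature              # similarity of every image pair vs. every report
        targets = torch.arange(logits.size(0), device=logits.device)
        # Matched (image pair, report) entries should score highest in both directions.
        return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))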

Project InnerEye: Democratizing Medical Imaging AI

Project InnerEye is a research project that is exploring ways in which machine learning has the potential to assist clinicians in planning radiotherapy treatments so that they can spend more time with their patients. Project InnerEye has been working closely with the University of Cambridge and Cambridge University Hospitals NHS Foundation Trust to make progress on this problem through a deep research collaboration. To make our research as accessible as possible, we released the InnerEye Deep Learning Toolkit as open-source software. Cambridge University Hospitals NHS Foundation Trust and University Hospitals Birmingham NHS Trust led an NHS AI in Health and Care Award to evaluate how this technology could potentially save clinicians’ time, reduce the time between the scan and commencing treatment, and scale this to more NHS Trusts. Any clinical use of the InnerEye machine learning models remains subject to regulatory approval.

Immunomics: Decoding the Immune System to Diagnose Disease

The human immune system is an astonishing diagnostic engine, continuously adapting itself to detect any signal of disease in the body. Essentially, the state of the immune system tells a story about virtually everything affecting a person’s health. What if we could “read” this story? Our scientific understanding of human health would be fundamentally advanced. More importantly, this would provide a platform for a new generation of precise medical diagnostics and treatment options. We are partnering with Adaptive Biotechnologies to develop the machine learning and biotechnology tools that will allow us to realize this dream.

Fundamental advances towards new medicines and therapeutics

Protein Engineering

Several research groups are delving into the potential of machine learning to enhance our comprehension of proteins and their pivotal role in various biological processes. We are also using AI to design new proteins for therapeutics and industry. By applying machine learning to extract patterns from databases of sequences, structures, and properties, Microsoft hopes to train models that can make protein engineering by directed evolution more efficient, and directly generate proteins that will perform desired functions. The ability to generate computationally distinct yet viable protein structures holds tremendous promise for uncovering novel biological insights and developing targeted therapies for previously untreatable illnesses.

Investigating the Cancer Microenvironment through Ex Vivo Research

Microsoft is working on ways to identify specific characteristics of cancer cells and their surrounding microenvironments that might be targeted for treatment. By studying how cancer cells and their surroundings interact with each other, the team aims to create a more precise approach to cancer treatment that takes into account both genetic and non-genetic factors.

Accelerating biomedical research

Microsoft and the Broad Institute – combining their expertise in genomics, disease research, cloud computing and data analytics – are developing an open-source platform to accelerate biomedical research using scalable analytical tools. The platform is built on top of the Broad Institute’s Terra platform, providing a user-friendly interface for accessing and analyzing genomic data. Leveraging Microsoft’s Azure cloud computing services, the platform will enable secure storage and analysis of large datasets. Additionally, the platform will incorporate machine learning and other advanced analytical tools to help researchers gain insights into complex diseases and develop new treatments.

Advancing clinical interpretation and exploration through multimodal language models

In the quest for precision medicine and accelerating biomedical discovery, Microsoft is committed to advancing the state of the art in biomedical natural language processing (NLP). A crucial factor in future-facing, data-driven health systems is the accessibility and interpretability of multimodal health information. To meet this need, Microsoft has laid a solid foundation across multiple modalities in biomedical NLP, building on our deep research assets in deep learning and biomedical machine reading.

One significant achievement is our development and application of large language models (LLMs) in biomedicine. Microsoft was among the first to create and assess the applicability of LLMs, such as PubMedBERT and BioGPT, which are highly effective in structuring biomedical data. However, to address the inherent limitations of LLMs, Microsoft is developing methods to teach them to fact-check themselves and provide fine-grained provenance. Additionally, Microsoft is exploring ways to facilitate efficient verification with humans in the loop.

Besides text, other modalities such as radiology images, digital pathology slides, and genomics contain valuable health information. Microsoft is developing multimodal learning and fusion methods that incorporate these modalities. These methods support tasks such as predicting disease progression and drug response, with the ultimate goal of delivering safe and high-quality healthcare.

Observational data in biomedicine is often plagued by confounders, making it challenging to draw causal relationships. To overcome this obstacle, Microsoft is developing advanced causal methods that correct implicit biases and scale biomedical discovery. These methods will allow Microsoft to leverage real-world evidence and contribute to the creation of more effective healthcare delivery systems. For our end-to-end biomedical applications, we have made exciting progress in deep collaborations with Microsoft partners such as The Jackson Laboratory and Providence St. Joseph Health.

Empowering everyone to live a healthier future

Microsoft has pursued interdisciplinary research that enables people to reach the full potential of their health for many years, but we’ve never been more excited about the possibilities than we are today. The latest developments in AI have inspired us to accelerate our efforts across these and many other projects, and we look forward to even more innovation and collaboration in this new era.

AI Frontiers: AI for health and the future of research with Peter Lee

Episode 137 | March 30, 2023

Powerful new large-scale AI models like GPT-4 are showing dramatic improvements in reasoning, problem-solving, and language capabilities. This marks a phase change for artificial intelligence—and a signal of accelerating progress to come.

In this new Microsoft Research Podcast series, AI scientist and engineer Ashley Llorens hosts conversations with his collaborators and colleagues about what these new models—and the models that will come next—mean for our approach to creating, understanding, and deploying AI, its applications in areas such as health care and education, and its potential to benefit humanity.

The second episode features Peter Lee, head of Microsoft Research. Lee was among a group within Microsoft to have early access to GPT-4 for evaluation and experimentation. Here, he applies his philosophy of tackling research from what will be inevitably true at a future point in time to this current moment. He also explores the differences that may make integrating today’s AI advancements into health care more attainable, a topic he expands on in the soon-to-be-released book The AI Revolution in Medicine: GPT-4 and Beyond and the New England Journal of Medicine article “Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine.”

Transcript

[MUSIC PLAYS]

Ashley Llorens: I’m Ashley Llorens with Microsoft Research. I’ve spent the last 20 years working in AI and machine learning. But I’ve never felt more fortunate to work in the field than at this moment. Just this month, March 2023, OpenAI announced GPT-4, a powerful new large-scale AI model with dramatic improvements in reasoning, problem-solving, and much more. This model and the models that will come after it represent a phase change in the decades-long pursuit of artificial intelligence.

In this podcast series, I’ll share conversations with fellow researchers about our initial impressions of GPT-4, the nature of intelligence, and ultimately, how innovations like these can have the greatest benefit for humanity.


Today we’re sitting down with Peter Lee, head of Microsoft Research. Peter and a number of MSR colleagues, including myself, have had the privilege of working to evaluate and experiment with GPT-4 and support its integration into Microsoft products.

Peter has also deeply explored the potential application of GPT-4 in health care, where its powerful reasoning and language capabilities could make it a useful copilot for practitioners in patient interaction, managing paperwork, and many other tasks.

Welcome to AI Frontiers.

[MUSIC FADES]

I’m going to jump right in here, Peter. So you and I have known each other now for a few years. And one of the values I believe that you and I share is around societal impact and in particular creating spaces and opportunities where science and technology research can have the maximum benefit to society. In fact, this shared value is one of the reasons I found coming to Redmond to work with you an exciting prospect.

Now, in preparing for this episode, I listened again to your discussion with our colleague Kevin Scott on his podcast around the idea of research in context. And the world’s changed a little bit since then, and I just wonder how that thought of research in context kind of finds you in the current moment.

Peter Lee: It’s such an important question and, you know, research in context, I think the way I explained it before is about inevitable futures. You try to think about, you know, what will definitely be true about the world at some point in the future. It might be a future just one year from now or maybe 30 years from now. But if you think about that, you know what’s definitely going to be true about the world and then try to work backwards from there.

And I think the example I gave in that podcast with Kevin was, well, 10 years from now, we feel very confident as scientists that cancer will be a largely solved problem. But aging demographics on multiple continents, particularly North America but also Europe and Asia, is going to give huge rise to age-related neurological disease. And so knowing that, that’s a very different world than today, because today most of medical research funding is focused on cancer research, not on neurological disease.

And so what are the implications of that change? And what does that tell us about what kinds of research we should be doing? The research is still very future oriented. You’re looking ahead a decade or more, but it’s situated in the real world. Research in context. And so now if we think about inevitable futures, well, it’s looking increasingly inevitable that very general forms of artificial intelligence at or potentially beyond human intelligence are inevitable. And maybe very quickly, you know, like in much, much less than 10 years, maybe much less than five years.

And so what are the implications for research and the kinds of research questions and problems we should be thinking about and working on today? That just seems so much more disruptive, so much more profound, and so much more challenging for all of us than the cancer and neurological disease thing, as big as those are.

I was reflecting a little bit through my research career, and I realized I’ve lived through one aspect of this disruption five times before. The first time was when I was still an assistant professor in the late 1980s at Carnegie Mellon University, and, uh, Carnegie Mellon University, as well as several other top universities’, uh, computer science departments, had a lot of, of really fantastic research on 3D computer graphics.

It was really a big deal. And so ideas like ray tracing, radiosity, uh, silicon architectures for accelerating these things were being invented at universities, and there was a big academic conference called SIGGRAPH that would draw hundreds of professors and graduate students, uh, to present their results. And then by the early 1990s, startup companies started taking these research ideas and founding companies to try to make 3D computer graphics real. One notable company that got founded in 1993 was NVIDIA.

You know, over the course of the 1990s, this ended up being a triumph of fundamental computer science research, now to the point where today you literally feel naked and vulnerable if you don’t have a GPU in your pocket. Like if you leave your home, you know, without your mobile phone, uh, it feels bad.

And so what happened is there’s a triumph of computer science research, let’s say in this case in 3D computer graphics, that ultimately resulted in a fundamental infrastructure for life, at least in the developed world. In that transition, which is just a positive outcome of research, it also had some disruptive effect on research.

You know, in 1991, when Microsoft Research was founded, one of the founding research groups was a 3D computer graphics research group that was amongst, uh, the first three research groups for MSR. At Carnegie Mellon University and at Microsoft Research, we don’t have 3D computer graphics research anymore. There had to be a transition and a disruptive impact on researchers who had been building their careers on this. Even with the triumph of things, when you’re talking about the scale of infrastructure for human life, it moves out of the realm completely of—of fundamental research. And that’s happened with compiler design. That was my, uh, area of research. It’s happened with wireless networking; it’s happened with hypertext and, you know, hyperlinked document research, with operating systems research, and all of these things, you know, have become things that that you depend on all day, every day as you go about your life. And they all represent just majestic achievements of computer science research. We are now, I believe, right in the midst of that transition for large language models.

Llorens: I wonder if you see this particular transition, though, as qualitatively different in that those other technologies are ones that blend into the background. You take them for granted. You mentioned that I leave the home every day with a GPU in my pocket, but I don’t think of it that way. Then again, maybe I have some kind of personification of my phone that I’m not thinking of. But certainly, with language models, it’s a foreground effect. And I wonder if, if you see something different there.

Lee: You know, it’s such a good question, and I don’t know the answer to that, but I agree it feels different. I think in terms of the impact on research labs, on academia, on the researchers themselves who have been building careers in this space, the effects might not be that different. But for us, as the consumers and users of this technology, it certainly does feel different. There’s something about these large language models that seems more profound than, let’s say, the movement of pinch-to-zoom UX design, you know, out of academic research labs into, into our pockets. This might get into this big question about, I think, the hardwiring in our brains that when we interact with these large language models, even though we know consciously they aren’t, you know, sentient beings with feelings and emotions, our hardwiring forces us; we can’t resist feeling that way.

I think it’s a, it’s a deep sort of thing that we evolved, you know, in the same way that when we look at an optical illusion, we can be told rationally that it’s an optical illusion, but the hardwiring in our kind of visual perception, just no amount of willpower can overcome, to see past the optical illusion.

And similarly, I think there’s a similar hardwiring that, you know, we are drawn to anthropomorphize these systems, and that does seem to put it into the foreground, as you’ve—as you’ve put it. Yeah, I think for our human experience and our lives, it does seem like it’ll feel—your term is a good one—it’ll feel more in the foreground.

Llorens: Let’s pin some of these, uh, concepts because I think we’ll come back to them. I’d like to turn our attention now to the health aspect of your current endeavors and your path at Microsoft.

You’ve been eloquent about the many challenges around translating frontier AI technologies into the health system and into the health care space in general. In our interview, [LAUGHS] actually, um, when I came here to Redmond, you described the grueling work that would be needed there. I’d like to talk a little bit about those challenges in the context of the emergent capabilities that we’re seeing in GPT-4 and the wave of large-scale AI models that we’re seeing. What’s different about this wave of AI technologies relative to those systemic challenges in, in the health space?

Lee: Yeah, and I think to be really correct and precise about it, we don’t know that GPT-4 will be the difference maker. That still has to be proven. I think it really will, but it, it has to actually happen because we’ve been here before where there’s been so much optimism about how technology can really help health care and in advanced medicine. And we’ve just been disappointed over and over again. You know, I think that those challenges stem from maybe a little bit of overoptimism or what I call irrational exuberance. As techies, we look at some of the problems in health care and we think, oh, we can solve those. You know, we look at the challenges of reading radiological images and measuring tumor growth, or we look at, uh, the problem of, uh, ranking differential diagnosis options or therapeutic options, or we look at the problem of extracting billing codes out of an unstructured medical note. These are all problems that we think we know how to solve in computer science. And then in the medical community, they look at the technology industry and computer science research, and they’re dazzled by all of the snazzy, impressive-looking AI and machine learning and cloud computing that we have. And so there is this incredible optimism coming from both sides that ends up feeding into overoptimism because the actual challenges of integrating technology into the workflow of health care and medicine, of making sure that it’s safe and sort of getting that workflow altered to really harness the best of the technology capabilities that we have now, ends up being really, really difficult.

Furthermore, when we get into actual application of medicine, so that’s in diagnosis and in developing therapeutic pathways, they happen in a really fluid environment, which in a machine learning context involves a lot of confounding factors. And those confounding factors ended up being really important because medicine today is founded on precise understanding of causes and effects, of causal reasoning.

Our best tools right now in machine learning are essentially correlation machines. And as the old saying goes, correlation is not causation. And so if you take a classic example like does smoking cause cancer, it’s very important to take account of the confounding effects and know for certain that there’s a cause-and-effect relationship there. And so there’s always been those sorts of issues.

When we’re talking about GPT-4, I remember I was sitting next to Eric Horvitz the first time it got exposed to me. So Greg Brockman from OpenAI, who’s amazing, and actually his whole team at OpenAI is just spectacularly good. And, uh, Greg was giving a demonstration of an early version of GPT-4 that was codenamed Davinci 3 at the time, and he was showing, as part of the demo, the ability of the system to solve biology problems from the AP biology exam.

And it, you know, gets, I think, a score of 5, the maximum score of 5, on that exam. Of course, the AP exam is this multiple-choice exam, so it was making those multiple choices. But then Greg was able to ask the system to explain itself. How did you come up with that answer? And it would explain, in natural language, its answer. And what jumped out at me was in its explanation, it was using the word “because.”

“Well, I think the answer is C, because, you know, when you look at this aspect, uh, statement of the problem, this causes something else to happen, then that causes some other biological thing to happen, and therefore we can rule out answers A and B and E, and then because of this other factor, we can rule out answer D, and all the causes and effects line up.”

And so I turned immediately to Eric Horvitz, who was sitting next to me, and I said, “Eric, where is that cause-and-effect analysis coming from? This is just a large language model. This should be impossible.” And Eric just looked at me, and he just shook his head and he said, “I have no idea.” And it was just this mysterious thing.

And so that is just one of a hundred aspects of GPT-4 that we’ve been studying over the past now more than half a year that seemed to overcome some of the things that have been blockers to the integration of machine intelligence in health care and medicine, like the ability to actually reason and explain its reasoning in these medical scenarios, in medical terms, and that plus its generality just seems to give us a lot more optimism that this could finally be the very significant difference maker.

The other aspect is that we don’t have to focus squarely on that clinical application. We’ve discovered that, wow, this thing is really good at filling out forms and reducing paperwork burden. It knows how to apply for prior authorization for health care reimbursement. That’s part of the crushing kind of administrative and clerical burden that doctors are under right now.

This thing just seems to be great at that. And that doesn’t really impinge on life-or-death diagnostic or therapeutic decisions. But they happen in the back office. And those back-office functions, again, are bread and butter for Microsoft’s businesses. We know how to interact and sell and deploy technologies there, and so working with OpenAI, it seems like, again, there’s just a ton of reason why we think that it could really make a big difference.

Llorens: Every new technology has opportunities and risks associated with it. This new class of AI models and systems, you know, they’re fundamentally different because they’re not learning, uh, specialized function mapping. There were many open problems on even that kind of machine learning in various applications, and there still are, but instead, it’s—it’s got this general-purpose kind of quality to it. How do you see both the opportunities and the risks associated with this kind of general-purpose technology in the context of, of health care, for example?

Lee: Well, I—I think one thing that has made an unfortunate amount of social media and public media attention are those times when the system hallucinates or goes off the rails. So hallucination is actually a term which isn’t a very nice term. It really, for listeners who aren’t familiar with the idea, is the problem that GPT-4 and other similar systems can have sometimes where they, uh, make stuff up, fabricate, uh, information.

You know, over the many months now that we’ve been working on this, uh, we’ve witnessed the steady evolution of GPT-4, and it hallucinates less and less. But what we’ve also come to understand is that it seems that that tendency is also related to GPT-4’s ability to be creative, to make informed, educated guesses, to engage in intelligent speculation.

And if you think about the practice of medicine, in many situations, that’s what doctors and nurses are doing. And so there’s sort of a fine line here in the desire to make sure that this thing doesn’t make mistakes versus its ability to operate in problem-solving scenarios that—the way I would put it is—for the first time, we have an AI system where you can ask it questions that don’t have any known answer. It turns out that that’s incredibly useful. But now the question is—and the risk is—can you trust the answers that you get? One of the things that happens is GPT-4 has some limitations, particularly that can be exposed fairly easily in mathematics. It seems to be very good at, say, differential equations and calculus at a basic level, but I have found that it makes some strange and elementary errors in basic statistics.

There’s an example from my colleague at Harvard Medical School, Zak Kohane, uh, where he uses standard Pearson correlation kinds of math problems, and it seems to consistently forget to square a term and—and make a mistake. And then what is interesting is when you point out the mistake to GPT-4, its first impulse sometimes is to say, “Uh, no, I didn’t make a mistake; you made a mistake.” Now that tendency to kind of accuse the user of making the mistake, it doesn’t happen so much anymore as the system has improved, but we still in many medical scenarios where there’s this kind of problem-solving have gotten in the habit of having a second instance of GPT-4 look over the work of the first one because it seems to be less attached to its own answers that way and it spots errors very readily.
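
The two-instance review pattern Lee describes can be sketched in a few lines of Python. This is only a minimal illustration, not the setup used by the team; the chat() helper is a hypothetical stand-in for whatever chat-completion API is available.

```python
# Minimal sketch of the "second instance as reviewer" pattern described above.
# `chat` is a hypothetical helper standing in for any chat-completion API call;
# it takes a list of {"role", "content"} messages and returns the reply text.

def chat(messages):
    raise NotImplementedError("wire this to your chat-completion endpoint")

def answer_with_review(question: str) -> dict:
    # First instance: attempt the problem.
    draft = chat([
        {"role": "system", "content": "You are a careful clinical reasoning assistant."},
        {"role": "user", "content": question},
    ])

    # Second, independent instance: review the draft it did not produce itself.
    review = chat([
        {"role": "system", "content": "You are a skeptical reviewer. Check the work below "
                                      "for factual, logical, and arithmetic errors."},
        {"role": "user", "content": f"Question:\n{question}\n\nProposed answer:\n{draft}"},
    ])

    return {"draft": draft, "review": review}
```

Because the reviewer never produced the draft, it has no stake in defending it, which is the property Lee highlights.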

So that whole story is a long-winded way of saying that there are risks because we’re asking this AI system for the first time to tackle problems that require some speculation, require some guessing, and may not have precise answers. That’s what medicine is at core. Now the question is to what extent can we trust the thing, but also, what are the techniques for making sure that the answers are as good as possible. So one technique that we’ve fallen into the habit of is having a second instance. And, by the way, that second instance ends up really being useful for detecting errors made by the human doctor, as well, because that second instance doesn’t care whether the answers were produced by man or machine. And so that ends up being important. But now moving away from that, there are bigger questions that—as you and I have discussed a lot, Ashley, at work—pertain to this phrase responsible AI, uh, which has been a research area in computer science research. And that term, I think you and I have discussed, doesn’t feel apt anymore.

I don’t know if it should be called societal AI or something like that. And I know you have opinions about this. You know, it’s not just errors and correctness. It’s not just the possibility that these things might be goaded into saying something harmful or promoting misinformation, but there are bigger issues about regulation; about job displacements, perhaps at societal scale; about new digital divides; about haves and have-nots with respect to access to these things. And so there are now these bigger looming issues that pertain to the idea of risks of these things, and they affect medicine and health care directly, as well.

Llorens: Certainly, this matter of trust is multifaceted. You know, there’s trust at the level of institutions, and then there’s trust at the level of individual human beings that need to make decisions, tough decisions, you know—where, when, and if to use an AI technology in the context of a workflow. What do you see in terms of health care professionals making those kinds of decisions? Any barriers to adoption that you would see at the level of those kinds of independent decisions? And what’s the way forward there?

Lee: That’s the crucial question of today right now. There is a lot of discussion about to what extent and how should, for medical uses, how should GPT-4 and its ilk be regulated. Let’s just take the United States context, but there are similar discussions in the UK, Europe, Brazil, Asia, China, and so on.

In the United States, there’s a regulatory agency, the Food and Drug Administration, the FDA, and they actually have authority to regulate medical devices. And there’s a category of medical devices called SaMDs, software as a medical device, and the big discussion really over the past, I would say, four or five years has been how to regulate SaMDs that are based on machine learning, or AI. Steadily, there’s been, uh, more and more approval by the FDA of medical devices that use machine learning, and I think the FDA in the United States has been getting closer and closer to actually having a fairly, uh, solid framework for validating ML-based medical devices for clinical use. As far as we’ve been able to tell, those emerging frameworks don’t apply at all to GPT-4. The methods for doing the clinical validation do not make sense and don’t work for GPT-4.

And so a first question to ask is—even before you get to, should this thing be regulated?—is if you were to regulate it, how on earth would you do it. Uh, because it’s basically putting a doctor’s brain in a box. And so, Ashley, if I put a doctor—let’s take our colleague Jim Weinstein, you know, a great spine surgeon. If we put his brain in a box and I give it to you and ask you, “Please validate this thing,” how on earth do you think about that? What’s the framework for that? And so my conclusion in all of this—it’s possible that regulators will react and impose some rules, but I think it would be a mistake, because I think my fundamental conclusion of all this is that at least for the time being, the rules of application engagement have to apply to human beings, not to the machines.

Now the question is what should doctors and nurses and, you know, receptionists and insurance adjusters, and all of the people involved, you know, hospital administrators, what are their guidelines and what is and isn’t appropriate use of these things. And I think that those decisions are not a matter for the regulators, but that the medical community itself should take ownership of the development of those guidelines and those rules of engagement and encourage, and if necessary, find ways to impose—maybe through medical licensing and other certification—adherence to those things.

That’s where we’re at today. Someday in the future—and we would encourage and in fact we are actively encouraging universities to create research projects that would try to explore frameworks for clinical validation of a brain in a box, and if those research projects bear fruit, then they might end up informing and creating a foundation for regulators like the FDA to have a new form of medical device. I don’t know what you would call it, AI MD, maybe, where you could actually relieve some of the burden from human beings and instead have a version of some sense of a validated, certified brain in a box. But until we get there, you know, I think it’s—it’s really on human beings to kind of develop and monitor and enforce their own behavior.

Llorens: I think some of these questions around test and evaluation, around assurance, are at least as interesting as, [LAUGHS] you know, doing research in that space is going to be at least as interesting as—as creating the models themselves, for sure.

Lee: Yes. By the way, I want to take this opportunity just to commend Sam Altman and the OpenAI folks. I feel like, uh, you and I and other colleagues here at Microsoft Research, we’re in an extremely privileged position to get very early access, specifically to try to flesh out and get some early understanding of the implications for really critical areas of human development like health and medicine, education, and so on.

The instigator was really Sam Altman and crew at OpenAI. They saw the need for this, and they really engaged with us at Microsoft Research to kind of dive deep, and they gave us a lot of latitude to kind of explore deeply in as kind of honest and unvarnished a way as possible, and I think it’s important, and I’m hoping that as we share this with the world, that—that there can be an informed discussion and debate about things. I think it would be a mistake for, say, regulators or anyone to overreact at this point. This needs study. It needs debate. It needs kind of careful consideration, uh, just to understand what we’re dealing with here.

Llorens: Yeah, what a—what a privilege it’s been to be anywhere near the epicenter of these—of these advancements. Just briefly back to this idea of a brain in a box. One of the super interesting aspects of that is it’s not a human brain, right? So some of what we might intuitively think about when you say brain in the box doesn’t really apply, and it gets back to this notion of test and evaluation in that if I give a licensing exam, say, to the brain in the box and it passes it with flying colors, had that been a human, there would have been other things about the intelligence of that entity that are underlying assumptions that are not explicitly tested in that test that then those combined with the knowledge required for the certification makes you fit to do some job. It’s just interesting; there are ways in which the brain that we can currently conceive of as being an AI in that box underperforms human intelligence in some ways and overperforms it in others.

Lee: Right.

Llorens: Verifying and assuring that brain in that—that box I think is going to be just a really interesting challenge.

Lee: Yeah. Let me acknowledge that there are probably going to be a lot of listeners to this podcast who will really object to the idea of “brain in the box” because it crosses the line of kind of anthropomorphizing these systems. And I acknowledge that, that there’s probably a better way to talk about this than doing that. But I’m intentionally being overdramatic by using that phrase just to drive home the point, what a different beast this is when we’re talking about something like clinical validation. It’s not the kind of narrow AI—it’s not like a machine learning system that gives you a precise signature of a T-cell receptor repertoire. There’s a single right answer to those things. In fact, you can freeze the model weights in that machine learning system as we’ve done collaboratively with Adaptive Biotechnologies in order to get an FDA approval as a medical device, as an SaMD. There’s nothing that is—this is so much more stochastic. The model weights matter, but they’re not the fundamental thing.

There’s an alignment of a self-attention network that is in constant evolution. And you’re right, though, that it’s not a brain in some really very important ways. There’s no episodic memory. Uh, it’s not learning actively. And so it, I guess to your point, it is just, it’s a different thing. The big important thing I’m trying to say here is it’s also just different from all the previous machine learning systems that we’ve tried and successfully inserted into health care and medicine.

Llorens: And to your point, all the thinking around various kinds of societally important frameworks are trying to catch up to that previous generation and not yet even aimed really adequately, I think, at these new technologies. You know, as we start to wrap up here, maybe I’ll invoke Peter Lee, the head of Microsoft Research, again, [LAUGHS] kind of—kind of where we started. This is a watershed moment for AI and for computing research, uh, more broadly. And in that context, what do you see next for computing research?

Lee: Of course, AI is just looming so large and Microsoft Research is in a weird spot. You know, I had talked before about the early days of 3D computer graphics and the founding of NVIDIA and the decade-long kind of industrialization of 3D computer graphics, going from research to just, you know, pure infrastructure, technical infrastructure of life. And so with respect to AI, this flavor of AI, we’re sort of at the nexus of that. And Microsoft Research is in a really interesting position, because we are at once contributors to all of the research that is making what OpenAI is doing possible, along with, you know, great researchers and research labs around the world. We’re also then part of the company, Microsoft, that wants to make this with OpenAI a part of the infrastructure of everyday life for everybody. So we’re part of that transition. And so I think for that reason, Microsoft Research, uh, will be very focused on kind of major threads in AI; in fact, we’ve sort of identified five major AI threads.

One we’ve talked about, which is this sort of AI in society and the societal impact, which encompasses also responsible AI and so on. One that our colleague here at Microsoft Research Sébastien Bubeck has been advancing is this notion of the physics of AGI. There has always been a very important thread of theoretical computer science, uh, in machine learning. But what we’re finding is that that style of research is increasingly applicable to trying to understand the fundamental capabilities, limits, and trend lines for these large language models. And you don’t anymore get kind of hard mathematical theorems, but it’s still kind of mathematically oriented, just like physics of the cosmos and of the Big Bang and so on, so physics of AGI.

There’s a third aspect, which more is about the application level. And we’ve been, I think in some parts of Microsoft Research, calling that costar or copilot, you know, the idea of how is this thing a companion that amplifies what you’re trying to do every day in life? You know, how can that happen? What are the modes of interaction? And so on.

And then there is AI4Science. And, you know, we’ve made a big deal about this, and we still see tremendous and mounting evidence that these large AI systems can give us new ways to make scientific discoveries in physics, in astronomy, in chemistry, biology, and the like. And that, you know, ends up being just really incredible.

And then there’s the core nuts and bolts, what we call model innovation. Just a little while ago, we released new model architectures, one called Kosmos, for doing multimodal kind of machine learning and classification and recognition interaction. Earlier, we did VALL-E, you know, which just based on a three-second sample of speech is able to ascertain your speech patterns and replicate speech. And those are kind of in the realm of model innovations, um, that will keep happening.

The long-term trajectory is that at some point, if Microsoft and other companies are successful, OpenAI and others, this will become a completely industrialized part of the infrastructure of our lives. And I think I would expect the research on large language models specifically to start to fade over the next decade. But then, whole new vistas will open up, and that’s on top of all the other things we do in cybersecurity, and in privacy and security, and the physical sciences, and on and on and on. For sure, it’s just a very, very special time in AI, especially along those five dimensions.

Llorens: It will be really interesting to see which aspects of the technology sink into the background and become part of the foundation and which ones remain up close and foregrounded and how those aspects change what it means to be human in some ways and maybe to be—to be intelligent, uh, in some ways. Fascinating discussion, Peter. Really appreciate the time today.

Lee: It was really great to have a chance to chat with you about things and always just great to spend time with you, Ashley.

Llorens: Likewise.

[MUSIC]

The post AI Frontiers: AI for health and the future of research with Peter Lee appeared first on Microsoft Research.

Read More

Research Focus: Week of March 27, 2023

Microsoft Research Focus 12 edition, week of March 27, 2023

Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft.

NEWS

Bing’s gendered translations tackle bias in machine translation

Machine translation (MT) models are designed to learn from large amounts of data collected from real-world sources. However, this training data may contain implicit biases which may be amplified by the model. One such example is the expression of gender, which can vary widely across different languages. In English, the word “lawyer” can refer to either a male or female individual, whereas in Spanish, “abogada” and “abogado” are used to refer to a female and male lawyer, respectively. As a result, MT models often assign arbitrary genders to animate entities in the translated output, even when the source text does not imply a specific gender.

The Microsoft Translator team has released a feature on Bing Translator which will provide feminine and masculine translations for sentences that have gender neutral words such as “doctor” or “teacher” when translating from English to Spanish, French and Italian. Additionally, to support ongoing research and track progress towards reducing gender bias in MT, the team has published a technical paper outlining their evaluation methodology and test sets. These test sets comprise a linguistically diverse corpus of gender-ambiguous source sentences, along with multiple alternative target language translations.


AWARD

Microsoft researcher honored by Women in AI Netherlands

Rianne van den Berg, a Principal Researcher at Microsoft Research in Amsterdam, has won the AI Researcher award from Women in AI Netherlands.

Rianne was recognized for her work in deep learning and physics. The award announcement noted her published work in journals such as Nature Physics and Physical Review Letters as well as at prominent AI conferences, such as NeurIPS, ICML and ICLR. The organization also cited Rianne’s dedication to diversity and inclusion.

In her role on the AI4Science team at Microsoft Research, Rianne’s research focuses on the intersection between computational chemistry and deep learning, with an emphasis on modeling chemical reactions. Her prior research has spanned topics ranging from generative modeling and variational inference to source compression, graph-structured learning, and condensed-matter physics. She received her PhD in theoretical condensed-matter physics in 2016 at the University of Amsterdam, where she also worked as a postdoctoral researcher as part of the Amsterdam Machine Learning Lab (AMLAB).


Spotlight: On-Demand EVENT

Microsoft Research Summit 2022

On-Demand
Watch now to learn about some of the most pressing questions facing our research community and listen in on conversations with 120+ researchers around how to ensure new technologies have the broadest possible benefit for humanity.

INTERVIEW

Recognizing women in technology

Why are women underrepresented in STEM and AI and how can we close that gap? How is technology shaping society, from gender issues to creativity and collaboration?

Microsoft Research Principal Researcher Cheng Zhang sat down to discuss these issues and more with the UK Chinese Women Connect Association, which recently recognized her as the Highly Commended awardee in the Chinese Women of the Year: Technology category.

In the interview, Cheng talks about her career in technology research and why she came to Microsoft Research Cambridge, where she works with the Machine Intelligence group. The conversation covers the impact of AI, strategies for making an impact—especially at a very large company—and the value of learning from others. Catch a video replay of this fascinating interview.

The post Research Focus: Week of March 27, 2023 appeared first on Microsoft Research.

Read More

AI Explainer: Foundation models and the next era of AI

The release of OpenAI’s GPT-4 is a significant advance that builds on several years of rapid innovation in foundation models. GPT-4, which was trained on the Microsoft Azure AI supercomputer, has exhibited significantly improved abilities across many dimensions: from summarizing lengthy documents, to answering complex questions about a wide range of topics and explaining the reasoning behind those answers, to telling jokes and writing code and poetry.

Microsoft Senior Principal Research Manager Ahmed H. Awadallah was among a group of researchers across the company who have worked in partnership with OpenAI over several months to evaluate this new model’s capabilities. In this video, recapped below, he tells the story of the technical innovations in recent years that have brought us to this moment: the surprising progress of GPT-4’s predecessor models, leading up to the capabilities demonstrated in ChatGPT, and the integration of the latest models into Bing.

While watching this video, you can hover to see video chapter titles and jump directly to those you’re interested in.

Introduction to foundation models [00:00-11:01]

Over the last decade, AI has made significant progress on perception tasks like image recognition and language processing. More recently, the field is witnessing new advances in the form of generative AI, underpinned by a class of large-scale models known as foundation models. Foundation models are trained on massive amounts of data and are capable of performing a wide range of tasks. With a simple natural language prompt like “describe a scene of the sun rising over the beach,” generative AI models can output a detailed description or produce an image based on the generated description, which can then be animated or even turned into video. Many recent language models are not only good at generating text but also generating, explaining, and debugging code.

Three components have been driving these advances:

  • The transformer architecture: A popular choice across modalities, the transformer architecture is efficient, easy to scale and parallelize, and can model interdependence between different components in input and output data.
  • Scale: Growing model size and the use of increasingly large amounts of data have resulted in what is being termed “emerging capabilities.” When models reach a critical size, they begin displaying capabilities not previously present.
  • In-context learning: Showing potential on a range of applications, from text classification to translation and summarization, this new training paradigm provides pre-trained models with instructions for new tasks or just a few examples instead of training or fine-tuning models on labeled data. Because no additional data or training is needed and prompts are provided in natural language, models can be applied right out of the box and aren’t limited to those with developer experience.
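
As a concrete illustration of the in-context learning point above, the sketch below shows a few-shot prompt for sentiment classification. The prompt wording is illustrative, and complete() is a hypothetical stand-in for a call to any large language model's completion endpoint.

```python
# Minimal sketch of few-shot in-context learning: the task is specified entirely
# in the prompt, with no fine-tuning. `complete` is a hypothetical stand-in for
# a call to a large language model's completion endpoint.

def complete(prompt: str) -> str:
    raise NotImplementedError("wire this to a completion endpoint")

FEW_SHOT_PROMPT = """Classify the sentiment of each review as positive, negative, or neutral.

Review: The battery lasts all day and the screen is gorgeous.
Sentiment: positive

Review: It arrived late and the box was damaged.
Sentiment: negative

Review: {review}
Sentiment:"""

def classify_sentiment(review: str) -> str:
    # The model completes the pattern established by the examples above.
    return complete(FEW_SHOT_PROMPT.format(review=review)).strip()
```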

From GPT-3 to ChatGPT – a jump in generative capabilities [11:02-19:07]

With the November 2022 release of ChatGPT, a language model optimized for dialogue, we saw exciting developments in text generation. Compared with GPT-3, an earlier language model in the GPT family, ChatGPT not only provides longer, more thorough, and more structured responses to questions and instructions but can also produce answers in different styles, or tones, and tailor explanations to different audiences, like a child, a first-year college student, or someone with a PhD.

Earlier language models such as GPT-3 were trained to predict the next word in a sentence using large amounts of text from the web with no direct human supervision. Several additional training approaches have helped fuel the improved performance of later models such as ChatGPT. These models are being trained on code in addition to text, which seems to be providing another opportunity to identify the relationship between different parts of speech. This is resulting in models that are better at following instructions and reasoning than models trained on text alone. Human-generated data is also contributing to better outputs. Instruction tuning adds the step of training models on prompts and responses created by a human, while model-generated responses ranked by a human are being employed to train a reward model that can be used to train the main model with reinforcement learning.
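
To make the reward-model step concrete, the toy sketch below shows the pairwise preference objective commonly used for this kind of training: the reward assigned to the human-preferred response is pushed above the reward assigned to the rejected one. Random vectors stand in for real response embeddings here; this is not the training code behind ChatGPT.

```python
# Toy sketch of pairwise reward-model training from human preference rankings.
# Random vectors stand in for embeddings of (prompt, response) pairs; in practice
# these would come from a language model.
import torch
import torch.nn as nn

torch.manual_seed(0)
embed_dim, batch_size = 64, 16

reward_model = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for step in range(100):
    # Embeddings of the human-preferred ("chosen") and less-preferred ("rejected") responses.
    chosen = torch.randn(batch_size, embed_dim)
    rejected = torch.randn(batch_size, embed_dim)

    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)

    # Pairwise ranking loss: pushes the chosen reward above the rejected reward.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The trained reward model can then score candidate responses during reinforcement learning, as the paragraph above describes.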

The fast-paced advancements demonstrated by these models have challenged one of the traditional methods used to measure progress: benchmarks. Improvements are happening so fast that benchmarks are becoming obsolete, with many solved or saturated as quickly as they come out.

Everyday impact: Integrating foundation models and products [19:09-27:20]

Foundation models are already appearing in products available today. For example, GitHub Copilot leverages OpenAI Codex to assist in writing code. The AI pair programmer has been shown to not only make developers feel more productive but to support them in actually getting more done. A GitHub study found participants using Copilot were 55 percent more productive than participants without access to Copilot.

Combining language models optimized for dialogue with external knowledge sources and tools is another avenue for improved experiences. The new Bing, for instance, brings together these models and search. Years of research have yielded insight into the web search experience; much of it involves reviewing and synthesizing information across a variety of resources identified via multiple queries, which is time-consuming. The new Bing can do the heavy lifting for the searcher, working behind the scenes to make the necessary queries, collect results, synthesize the information, and present a single complete answer.
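
The query-generation and synthesis loop described above can be sketched roughly as follows. Both web_search() and llm() are hypothetical stand-ins for a search API and a language model call; the production Bing pipeline is considerably more involved.

```python
# Rough sketch of the "generate queries, search, synthesize" loop described above.
# `web_search` and `llm` are hypothetical stand-ins for a search API and an LLM call.

def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to a language model")

def web_search(query: str, top_k: int = 3) -> list[str]:
    raise NotImplementedError("wire this to a search API")

def answer_complex_question(question: str) -> str:
    # 1. Ask the model to break the question into web search queries.
    queries = llm(
        f"List, one per line, the web search queries needed to answer:\n{question}"
    ).splitlines()

    # 2. Collect result snippets for each query.
    snippets = []
    for q in filter(None, (q.strip() for q in queries)):
        snippets.extend(web_search(q))

    # 3. Synthesize a single answer grounded in the collected snippets.
    context = "\n".join(snippets)
    return llm(
        f"Using only the sources below, answer the question.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
```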

Large language models and foundation models more broadly are not without their limitations, however. There are issues such as reliability, accuracy, staleness, and provenance that need to be explored. Additionally, each specific application of one of these models comes with its own challenges and opportunities. For example, in applying foundation models to web search, we need to rethink the overall user experience, including how people interact with search and how we improve, measure, and personalize the experience over time.

Transcript

Introduction to foundation models [00:00–11:01]

Hello, everyone. My name is Ahmed Awadallah. I am a researcher here at Microsoft Research. Today, I am going to be talking about foundation models and the impact they are having on the current era of AI.

If we look back at the last five to 10 years, AI has been making a significant impact on many perception tasks like image and object recognition, speech recognition, and most recently on language understanding tasks, where we have been seeing different AI models achieving superior performance and in many cases reaching performance equal to what a human annotator would do on the same task. Over the last couple of years, though, the frontier of AI has shifted toward generative AI.


We have had quite good text generation models for some time. You could actually prompt a model with asking it to describe an imaginary scene, and it will produce a very good description of what you have asked it to do. And then we started making a lot of progress on image generation, as well. With models like DALL-E 2 and Imagen and even models coming out from such startups like Midjourney and Stability AI, we have been getting to a level of quality of image generation that we have never seen before. Inspired by that, there has been also a lot of work on animating the generated images or even generating videos from scratch. Another frontier for generative models has been code, and not only generating code based on text prompt but also explaining the code or in some cases even debugging the code. I was listening to this episode of the Morning Edition on NPR when it aired at the beginning of February where they were attempting to use a bunch of AI models for producing a schematic design of a rocket and also for coming up with some equations for the rocket design. And, of course, the hypothetical design would have crashed and burned, but I couldn’t help but think how exciting it is that AI has become so good that we are even attempting to measure its proficiency on a field as complex as rocket science.

[2:11] If we look back, we will find that there are three main components that led to the current performance we are seeing from AI models: the transformer architecture, the scale, and in-context learning. The transformer in particular has been dominating the field of AI for the previous years. At the beginning, we started with natural language processing, and the architecture was so efficient that it took over the field of natural language processing within a very short amount of time. The transformer is a very efficient architecture that’s easy to scale, easy to parallelize, and relies at its heart on the attention mechanism, a technique that allows us to model interdependence between different components or different tokens in our input and output data. Transformers started off mostly in natural language processing, but slowly but surely, they made their way to pretty much any modality. So now we are seeing that models that are operating on images, on videos, on audio, and many other modalities are also using transformers. Five years after their inception, transformers have changed surprisingly little compared to when they started, despite so many attempts at producing better and more efficient variants, perhaps because the gains were limited to certain use cases or perhaps because the gains did not persist at scale. Another potential reason is that maybe they made the architecture less universal, which has been one of its biggest advantages.
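
Since the attention mechanism is the piece singled out here, the snippet below sketches scaled dot-product attention in a few lines of NumPy. Real transformer implementations add learned projections, multiple heads, masking, and much more; this is only meant to show the core computation.

```python
# Minimal scaled dot-product attention, the mechanism at the heart of the transformer.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Q, K, V: arrays of shape (sequence_length, d); returns (sequence_length, d)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # pairwise interdependence between tokens
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted mix of the value vectors

# Tiny example: 4 tokens with 8-dimensional representations.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (4, 8)
```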

[03:53] The next point is scale, and when we talk about scale, we really mean the amount of compute that’s being used to train the model, and that can be translated into either training bigger and bigger models with larger and larger number of parameters—and we have been seeing a steady increase of that over the previous years—but scale could also mean more data, using more data to train the model on larger and larger amounts of data. And we have seen different models over the previous few years taking different approaches in deciding how much data and how large the model is. But the consistent trend is that we have been scaling larger and larger and using more and more compute. Scale has also led to what is being called as “emerging capabilities.” And that’s one of the most interesting properties of scale that have been described over the previous year or so. By emerging capability, we mean that the model starts to show a certain ability that appears only when it reaches a critical size. Before that, the model is not demonstrating any of this ability at all. For example, let’s look at the figures here, and on the left-hand side, we see arithmetic. If we try to use language models to solve arithmetic word problems, up until a certain scale, they absolutely cannot solve the problem in any way, and they do not perform any better than random. But then at a certain critical point, we start seeing improved performance, and that performance just keeps getting better and better. And we have seen that at so many other tasks, as well, ranging from arithmetic to transliteration to multitask learning.

[05:38] And perhaps one of the most exciting emerging capabilities of language models recently is their ability to in-context learn, which has been introducing a new paradigm for using these models. If we take a look back at how we have been practicing machine learning in general, with deep learning, you would start by choosing an architecture, a transformer or before that an RNN or CNN, and then you train your model with full supervision. You have a lot of labeled data, and you train your model based on that data. When we started getting into pre-trained models, instead of training models from scratch, we actually start off with a pre-trained model and then fine-tune it, still on a lot of fully supervised labeled data for the task at hand. But then with in-context learning, suddenly we can actually use the models out of the box. We can just use a pre-trained model and use a prompt in order to learn—in order to perform a new task without actually doing any learning. We can do that in zero-shot settings, meaning we do not provide any examples at all, just instructions or a description of what the task is, or in a few-shot setting, where we just provide a small handful of examples to the model. For example, if we are interested in trying to do text classification, we can just—in this case sentiment analysis—we can just provide the text to the model and ask it to classify the text into either positive or negative. If the task is a little bit harder, we can provide few-shot samples, just a few examples of how we want the model to classify things into, say, positive, negative, or neutral, and then ask the model to reason about a new piece of text, and it actually does pretty well at it. And it’s not only simple tasks like text classification. We can do translation or summarization and much more complex tasks with that paradigm. We can even try to do things like arithmetic, where we try to give the model a word problem and ask it to come up with the answer. In the example we are showing right now, we did give the model just one sample to show it how we would solve a problem and then ask it to solve another problem. But in that particular case, the model actually failed. It did produce an answer, but it was not the correct answer. But then came the idea of chain-of-thought prompts, where instead of just showing the model the input and the output, we can actually also show it the steps it can take in order to get to that output from that particular input. In that case, we are just solving the arithmetic word problem step by step and showing an example of that to the model. When we do that, the models are not only able to produce the correct answer, but they are also able to walk us step by step through how they produced that answer. That mechanism is referred to as chain-of-thought prompting, and it has been used very prominently in so many tasks, showing superior performance on multiple tasks. It has also been used in many different ways, including in fine-tuning and training some of the models. The “pre-train and then fine-tune” paradigm has been the established paradigm for years, since maybe the inception of BERT and similar pre-trained language models. But now you would see that there’s an increasing shift toward using the models by prompting them instead of having to fine-tune them.
That’s evident in a lot of practical usage of the models, but even in machine learning publications that use natural language processing tasks, there has been a switch toward prompting instead of fine-tuning. In-context learning and prompting matter a lot because they are actually changing the way we apply the models to new tasks. The ability to apply the models to new tasks out of the box, without collecting additional data and without doing any additional training, is an amazing ability that increases the number of tasks the models can be applied to and also reduces the amount of effort needed to build models for these tasks.

[09:57] The performance has also been amazing with only a few examples provided, and the tasks in this setting are being adapted to the model rather than the models being adapted to the tasks. If you think about the fine-tuning paradigm, what we did is that we already had the pre-trained model and we were fine-tuning it to adapt to the task. Now we are trying to frame the task in a way that’s more friendly to how the model is being trained so that the model can perform well on the task even without any fine-tuning. Finally, this allows humans to interact with the models in their normal form of communication, in natural language. We can just give instructions describing the task that we want, and the model would perform the task. And that blurs the line between who is an ML user and who is an ML developer because now anyone can just prompt and describe different tasks to the language model and get the language model to do a large number of tasks without needing any training or any development involved.
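
To make the chain-of-thought idea from this part of the talk concrete, here is roughly what such a prompt can look like. The worked example and the complete() helper are illustrative stand-ins, not the exact prompts discussed in the video.

```python
# Sketch of chain-of-thought prompting: the one-shot example includes the reasoning
# steps, not just the final answer. `complete` is a hypothetical stand-in for an LLM call.

def complete(prompt: str) -> str:
    raise NotImplementedError("wire this to a completion endpoint")

COT_PROMPT = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. 5 + 6 = 11.
The answer is 11.

Q: {question}
A:"""

def solve_with_reasoning(question: str) -> str:
    # The model is nudged to produce intermediate steps before the final answer.
    return complete(COT_PROMPT.format(question=question))
```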

From GPT-3 to ChatGPT—a jump in generative capabilities [11:02–19:07]

[11:02] Now looking back at the last three months or so, we have been seeing the field changing quite a bit and a tremendous amount of excitement happening around the release of the ChatGPT model. And if we think about the ChatGPT model as a generative model, we would see that there have been other generative models out there from the GPT family and other models, as well, that have been doing a decent job at text generation. So you can take one of these models, in this case GPT-3, and prompt it with a question asking it to explain what a foundation language model means, and it would give you a pretty decent answer. You can ask the same question to ChatGPT and you’ll find that it’s able to provide a much better answer. It’s longer; it’s more thorough; it’s more structured. You can ask it to style it in different ways. You can ask it to simplify it in different ways. And all of these are capabilities that the previous generation of the models could not really do. If we look at how ChatGPT is described, the description lists different things, but it’s mostly optimized for dialogue, allowing humans to interact in natural language. It’s much better at following instructions and so on and so forth. If we look step by step at how this was actually manifested in the training, we will see, looking at the base models that ChatGPT was built on and other models before ChatGPT, that language model training was following a self-supervised pre-training approach, where we have a lot of unsupervised language, web-scale language, that we are training the models on, and the models in this particular case are trained with an autoregressive next word prediction approach. So we are looking at an input context, which is a sentence or a part of a sentence, and trying to predict the next word. But then over the last year or so, we have been seeing a shift where models are being trained not just on text but also on code. For example, GPT-3.5 models are trained on both text and code, and surprisingly, training the models on both text and code improves their performance on many tasks that have nothing to do with code. In the figure we see right now, different models are being compared—models that were trained with code and models that were not—and we are seeing that the models trained with both text and code show better performance at following task instructions and better performance at reasoning, compared to similar models that were trained on text only. So the training on code seems to be grounding the models in different ways, allowing them to learn a little bit more about how to reason, about how to look at structured relations between different parts of the text.

[13:59] The second main difference is the idea of instruction tuning, which has been—what you have been seeing becoming more and more popular over different models over the last year, maybe starting with InstructGPT that introduced the idea of training the models on human-generated data. And this is a departure from the traditional self-supervised approach, where we have been only training the model on unsupervised, free, unstructured text. Now there’s an additional step in the training process that actually trains the models on human-generated data. The human-generated data takes the format of prompt and the response, and it’s trying to teach the model to respond in a particular way given a prompt, and this step of instruction tuning has been actually helping the models get a lot better, especially in zero-shot performance. And we see here that the instruction-tuned models tend to perform a lot better than their non-instruction–tuned counterpart, especially in zero-shot settings. And the last step of the training process introduces yet another human-generated data. In this case, we actually have different responses generated by the model and we have a human providing preferences to all these responses so in a sense ranking responses and choosing which response is better than other responses. This data is used to train a reward model that can then be used to actually train the main model with reinforcement learning. And this approach further aligns the model into responding in certain ways that correspond to the way the human has been providing the feedback data. This notion of training the model with human feedback data is very interesting, and it’s creating a lot of traction with many people thinking about the best technique to train on human feedback data, the best form of human feedback to collect, to train the model on, and it would probably help us improve the models even further in the near future.

[16:02] Now with all these advances we have been seeing, the pace of innovation and the acceleration of the advances have been moving so fast that it has been very challenging in so many ways, but perhaps one of the most profound challenges is to the notion of benchmarking. Traditionally, research in machine learning has been very dependent on using solid benchmarks to measure the progress of different approaches, but the pace of innovation has been really challenging that recently. To understand how fast the progress has been, let’s look at this data coming from Hypermind, a forecasting company that uses crowd forecasting and has been tracking some of the AI benchmarks recently. The first benchmark is the Massive Multitask Language Understanding benchmark, a large collection of language understanding tasks. In June of 2021, a forecast was made that in a year, by June 2022, we would get to around 57 percent performance on this task. But in reality, what happened is that by June 2022, we were at around 67 percent, and a couple of months later, we were at 75 percent, and we keep seeing more and more rapid improvements after that. A second task is the MATH task, which is a collection of middle and high school math problems, and here the prediction was that in a year, we would get to around 13 percent. But in reality, we ended up going well beyond that within one year, and we still see more and more advances happening at a faster-than-ever-expected pace. That rate of improvement is actually resulting in a lot of the benchmarks being saturated really fast.

[17:51] If we look back at benchmarks like MNIST and Switchboard, it took the community 20-plus years in order to fully saturate these benchmarks. And that has been accelerating, accelerating to the point where now we see benchmarks being saturated in a year or less. In fact, many of the benchmarks are becoming obsolete to the point that only 66 percent of machine learning benchmarks have received more than three results at different time points, and many of them are solved or saturated soon after they are being released. And that actually motivated the community to come together with very large efforts to try to design benchmarks that are designed specifically to challenge large language models. In that particular case, with BIG-bench, more than 400 authors from over 100 institutions came together to create it. But even with such an elaborate effort, we are seeing very fast progress, and with large language models and chain-of-thought prompting that we discussed earlier, we are seeing that we are making very fast progress against the hardest tasks in BIG-bench, and in many of them, models are already performing better than humans right now.

Everyday impact: Integrating foundation models and products [19:09–27:20]

[19:09] The foundation models are not only getting better and better at benchmarks, but they are actually changing many products that we use every day. We mentioned code generation earlier, so let’s talk a little bit about Copilot. GitHub Copilot is a new experience that helps developers write code, and Copilot is very interesting in many perspectives. One is how fast it went from the model being created in research to how—to the point it made it as a product generally available in GitHub Copilot but also in how much user value it has been generating. This study that was done by the Copilot GitHub team was looking at quantifying the value these models were providing to developers. And in the first part of the study, they asked different questions to the developers, trying to assess how useful the models are, and we see that 88 percent of the participants reported that they feel like they are much more productive when using Copilot than before, and they reported many other positive implications on their productivity, as well. But perhaps even more interesting, the study did a controlled study where there were two groups of developers trying to solve the same set of tasks. A group of them had access to Copilot, and the other group did not, and interestingly, the group that had access to Copilot not only finished the tasks at a higher success rate but also at a much more efficient rate. Overall, they were 55 percent more productive. Fifty-five percent more productivity in a coding scenario is an amazing progress that a lot of people would have been very surprised to think about a model like Copilot performing so fast with such value.

[21:10] Now beyond code generation and text generation, another frontier where these models are starting to shine is when we start connecting them with external knowledge sources and external tools. Language models that have been optimized for dialogue have amazing language capabilities; they do really good at understanding language, at following instructions. They also do really well at synthesizing and generating answers. They are also conversational in nature and do store knowledge from the training data that they were trained on. But they do have a lot of limitations around reliability, factualness, staleness, access to more recent information that was not part of the training data, provenance, and so on. And that’s why connecting these models to external knowledge sources and tools could be super exciting. Let’s talk about, for example, connecting language models to search as we have seen recently with the new Bing.

[22:14] If we take a look back years ago, there was many, many studies studying web search, studying tasks that people try to complete in web search scenarios. And many of these tasks were deemed as complex search tasks, tasks that are not navigational, as in trying to go to a particular website, or that are not simple informational tasks where you are trying to look up a fact that you can quickly get with one query but more complex tasks that involve multiple queries. Maybe you are planning a travel, maybe you are trying to buy a product, and as part of your research process, there are multifaceted queries that you would like to look at. There has been a lot of research understanding user behavior with such tasks and how prevalent they are and how much time and effort people spend in order to perform them. And they typically involve spending a significant amount of time with the search engine, reading and synthesizing information from different sources with different queries. But with a new experience like the experience Bing is providing, we can actually take one of these queries and provide much more complex long queries to the search engine. And the search engine uses both search and the power of the language model to generate multiple queries, get the results of all of these queries, and synthesize a detailed answer back to the searcher. Not only that, but it can recommend additional searches and additional ways you could interact with the search engine in order to learn more. That has the potential of saving a lot of time and a lot of effort for many searchers in supporting these complex search tasks in a much better way. Not only that, but there are some of these complex search tasks that are multistep in nature, where I would start with one query and then follow up with another query based on the information I get from the first query. Imagine that I am doing this search before the Super Bowl where I am trying to understand some comparisons, stats, between the two quarterbacks that are going to face each other, and I start with that query. What the search engine did in that particular case is that it actually started with a query where it was trying to identify who are the two quarterbacks that are going to be playing in the Super Bowl. And if I have done that as a human, I would have done that. I would have identified the teams and the two quarterbacks, and then maybe I would follow up with another query where I would actually search for the stats of the two quarterbacks I am asking about, and get that and actually synthesize the information maybe from different results and then get to the answer I am looking for. But with the new Bing experience, I can just issue the query and all of that is happening in the background. Different search queries are being generated, submitted to the search engine, recent results are getting collected, and a single answer is being synthesized and displayed, making me as a searcher much more productive and much more efficient.

[25:21] The potential of LLM integrated—large language models integrated with search and other tools is very huge and can add much, much value to so many scenarios. But there are also a lot of challenges and a lot of opportunities and a lot of limitations that needs to be addressed. Reliability and safety are one of them; making the models more accurate; thinking about trust, provenance, and bias. User experience and behavior and how the new experience would affect how the users are interacting with the search engine is another one, with new and different tasks or different user interfaces or even different behavior models. Search has been a very well-studied experience, and we have very good understanding of how users interact with the search engine and very reliable behavior models to predict that. Changing this experience will require a lot of additional study there. Personalization and managing user preferences and search history and so on and so forth has also been a very well-studied field in web search, and with new experiences like that, we have so many opportunities and thinking about things like personalization and user experience again but also evaluation and what do metrics mean. How do we measure user satisfaction? How do we understand good and bad abandonment? Good abandonment as in when people get satisfied with the result but they don’t have to click on anything on the search result page, and bad abandonment being the opposite of that. Thinking about feedback loops, which has been playing a large part in improving search engines, and how can we apply them to new experiences and new scenarios. So while integrating language models with an experience like search and other tools and experiences is very exciting, it’s actually also creating so many opportunities for new research problems or for revisiting previous search problems that we had very good understanding for.

Conclusion [27:21–28:37]

[27:21] To conclude, we have been seeing incredible advances in AI over the past couple of years. The progress has been accelerating and outpacing expectations in so many ways, and the advances are not only in terms of academic benchmarks and publications; we are also seeing an explosion of applications that are changing the products we use every day. However, we are much closer to the beginning of a new era with AI than we are to the end state of AI capabilities. There are so many opportunities, and we will probably see a lot more advances and even more accelerated progress over the coming months and years. And there are so many challenges that remain and many new opportunities that are arising because of the state of these models. It's a very exciting time for AI, and we are really looking forward to seeing the advances that will happen moving forward, the applications that will result from them, and how they will affect every one of us through the products we use every day. Thank you so much.

[END]



AI Frontiers: The Physics of AI with Sébastien Bubeck


Episode 136 | March 23, 2023

Powerful new large-scale AI models like GPT-4 are showing dramatic improvements in reasoning, problem-solving, and language capabilities. This marks a phase change for artificial intelligence—and a signal of accelerating progress to come.

In this new Microsoft Research Podcast series, AI scientist and engineer Ashley Llorens hosts conversations with his collaborators and colleagues about what these new models—and the models that will come next—mean for our approach to creating, understanding, and deploying AI, its applications in areas such as health care and education, and its potential to benefit humanity.

The first episode features Sébastien Bubeck, who leads the Machine Learning Foundations group at Microsoft Research in Redmond. He and his collaborators conducted an extensive evaluation of GPT-4 while it was in development, and have published their findings in a paper that explores its capabilities and limitations—noting that it shows “sparks” of artificial general intelligence.

Transcript

Ashley Llorens: I’m Ashley Llorens with Microsoft Research. I spent the last 20 years working in AI and machine learning. But I’ve never felt more fortunate to work in the field than at this moment. Just this month, March 2023, OpenAI announced GPT-4, a powerful new large-scale AI model with dramatic improvements in reasoning, problem-solving, and much more. This model, and the models that will come after it, represent a phase change in the decades-long pursuit of artificial intelligence.

In this podcast series, I’ll share conversations with fellow researchers about our initial impressions of GPT-4, the nature of intelligence, and ultimately how innovations like these can have the greatest benefit for humanity.


Today I’m sitting down with Sébastien Bubeck, who leads the Machine Learning Foundations Group at Microsoft Research. In recent months, some of us at Microsoft had the extraordinary privilege of early access to GPT-4. We took the opportunity to dive deep into its remarkable reasoning, problem-solving, and the many other abilities that emerge from the massive scale of GPT-4.

Sébastien and his team took this opportunity to probe the model in new ways to gain insight into the nature of its intelligence. Sébastien and his collaborators have shared some of their observations in the new paper called “Sparks of Artificial General Intelligence: Experiments with an early version of GPT-4.”
Welcome to A.I. Frontiers.

Sébastien, I’m excited for this discussion.

The place that I want to start is with what I call the AI moment. So, what do I mean by that? In my experience, everyone that’s picked up and played with the latest wave of large-scale AI models, whether it’s ChatGPT or the more powerful models coming after, has a moment.

They have a moment where they’re genuinely surprised by what the models are capable of, by the experience of the model, the apparent intelligence of the model. And in my observation, the intensity of the reaction is more or less universal. Although everyone comes at it from their own perspective, it triggers its own unique range of emotions, from awe to skepticism.

So now, I’d love from your perspective, the perspective of a machine learning theorist: what was that moment like for you?

Sébastien Bubeck: That’s a great question to start. So, when we started playing with the model, we did what I think anyone would do. We started to ask mathematical questions, mathematical puzzles. We asked it to give some poetry analysis. Peter Lee did one on Black Thought, which was very intriguing. But every time we were left wondering, okay, but maybe it’s out there on the internet. Maybe it’s just doing some kind of pattern matching and it’s finding a little bit of structure. But this is not real intelligence. It cannot be. How could it be real intelligence when it’s such simple components coming together? So, for me, I think the awestruck moment was one night when I woke up and I turned on my laptop and fired up the Playground.

And I have a three-year-old at home, my daughter, who is a huge fan of unicorns. And I was just wondering, you know what? Let's ask GPT-4 if it can draw a unicorn. And in my professional life, I play a lot with LaTeX, this programming language for mathematical equations. And in LaTeX there is this subprogramming language called TikZ to draw images using code. And so I just asked it: can you draw a unicorn in TikZ? And it did it so beautifully. It was really amazing. You can render it and you can see the unicorn. And no, it wasn't a perfect unicorn.

What was amazing is that it drew a unicorn, which was quite abstract. It was really the concept of a unicorn, all the bits and pieces of what makes a unicorn, the horn, the tail, the fur, et cetera. And this is what really struck me at that moment. First of all, there is no unicorn in TikZ online.

I mean, who would draw a unicorn in a mathematical language? This doesn’t make any sense. So, there is no unicorn online. I was pretty sure of that. And then we did further experiments to confirm that. And we’re sure that it really drew the unicorn by itself. But really what struck me is this getting into what is a concept of a unicorn, that there is a head, a horn, the legs, et cetera.

This has been a longstanding challenge for AI research. This has always been the problem with all those AI systems that came before, like the convolutional neural networks that were trained on ImageNet and image datasets and that can recognize whether there is a cat or dog in the image, et cetera. Those neural networks were always hard to interpret, and it was not clear how exactly they were detecting whether there is a cat or a dog; in particular, they were susceptible to these adversarial examples, small perturbations to the input that would completely change the output.

And it was understood that the big issue is that they didn’t really get the concept of a cat or dog. And then suddenly with GPT-4, it was kind of clear to me at that moment that it really understood something. It really understands what is a unicorn. So that was the moment for me.

Ashley Llorens: That’s fascinating. What did you feel in that moment? Does that change your concept of your field of study, your relationship to the field?

Sébastien Bubeck: It really changed a lot of things to me. So first of all, I never thought that I would live to see what I would call a real artificial intelligence. Of course, we’ve been talking about AI for many decades now. And the AI revolution in some sense has been happening for a decade already.

But I would argue that all the systems before were really this narrow intelligence, which does not really rise to the level of what I would call intelligence. Here, we’re really facing something which is much more general and really feels like intelligence. So, at that moment, I felt honestly lucky. I felt lucky that I had early access to this system, that I could be one of the first human beings to play with it.

And I saw that this is really going to change the world dramatically. And selfishly, (it) is going to change my field of study, as you were saying. Now suddenly we can start to attack: what is intelligence, really? We can start to approach this question, which seemed completely out of reach before.

So really, deep down inside me, incredible excitement. That's really what I felt. Then upon reflection, in the next few days, there was also some worry, of course. Clearly things are accelerating dramatically. Not only did I never think that I would live to see a real artificial intelligence, but the timeline that I had in mind ten or 15 years ago, when I was a Ph.D. student, was that maybe by the end of the decade, the 2010s, we would have a system that can play Go better than humans.

That was my target. And maybe 20 years after that, we will have systems that can do language. And maybe somewhere in between, we will have systems that can play multiplayer games like Starcraft II or Dota 2. All of those things got compressed into the 2010s.

And by the end of the 2010s, we had basically solved language in a way with GPT-3. And now we enter the 2020s and suddenly something totally unexpected which wasn’t in the 70 years of my life and professional career: intelligence in our hands. So, it’s just changing everything and this compressed timeline, I do worry where is this going.

There are still fundamental limitations that I’m sure we’re going to talk about. And it’s not clear whether the acceleration is going to keep going. But if it does keep going, it’s going to challenge a lot of things for us as human beings.

Ashley Llorens: As someone that’s been in the field for a while myself, I had a very similar reaction where I felt like I was interacting with a real intelligence, like something deserving of the name artificial intelligence—AI. What does that mean to you? What does it mean to have real intelligence?

Sébastien Bubeck: It’s a tough question, because, of course, intelligence has been studied for many decades. And psychologists have developed tests of your level of intelligence. But in a way, I feel intelligence is still something very mysterious. It’s kind of—we recognize it when we see it. But it’s very hard to define.

And what I’m hoping is that with this system, what I want to argue is that basically, it was very hard before to study what is intelligence, because we had only one example of intelligence. What is this one example? I’m not necessarily talking about human beings, but more about natural intelligence. By that, I mean intelligence that happened on planet Earth through billions of years of evolution.

This is one type of intelligence. And this was the only example of intelligence that we had access to. And so all our theories were fine-tuned to that example of intelligence. Now, I feel that we have a new system which I believe rises to the level of being called an intelligent system. We suddenly have two examples which are very different.

GPT-4’s intelligence is comparable to human in some ways, but it’s also very, very different. It can both solve Olympiad-level mathematical problems and also make elementary school mistakes when adding two numbers. So, it’s clearly not human-like intelligence. It’s a different type of intelligence. And of course, because it came about through a very different process than natural evolution, you could argue that it came about through a process which you could call artificial evolution.

And so I’m hoping that now that we have those two different examples of intelligence, maybe we can start to make progress on defining it and understanding what it is. That was a long-winded answer to your question, but I don’t know how to put it differently.

Basically, the way for me to test intelligence is to really ask creative questions, difficult questions whose answers you do not find online and (through) search. In a way, you could ask: is Bing, is Google, are search engines intelligent? They can answer tough questions. Are these intelligent systems? Of course not. Everybody would say no.

So, you have to distinguish: what is it that makes us say that GPT-4 is an intelligent system? Is it just the fact that it can answer many questions? No, it's more that it can inspect its answers. It can explain itself. It can interact with you. You can have a discussion. This interaction is really the essence of intelligence to me.

Ashley Llorens: It certainly is a provocative and unsolved kind of question of: what is intelligence. And perhaps equally mysterious is how we actually measure intelligence. Which is a challenge even for humans. Which I’m reminded of with young kids in the school system, as I know you are or will be soon here as a father.

But you’ve had to think differently as you’ve tried to measure the intelligence of GPT-4. And you alluded to…I’d say the prevailing way that we’ve gone about measuring the intelligence of AI systems or intelligent systems is through this process of benchmarking, and you and your team have taken a very different approach.

Can you maybe contrast those?

Sébastien Bubeck: Of course, yeah. So maybe let me start with an example. So, we used GPT-4 to pass mock interviews for software engineer positions at Amazon, Google, and Meta. It passes all of those interviews very easily. Not only does it pass those interviews, but it also ranks at the very top among the human beings.

In fact, for the Amazon interview, not only did it pass all the questions, but it scored better than 100% of all the human users on that website. So, this is really incredible. And headlines would be, GPT-4 can be hired as a software engineer at Amazon. But this is a little bit misleading to view it that way because those tests, they were designed for human beings.

They make a lot of hidden assumptions about the person that they are interviewing. In particular, they will not test whether that person has a memory from one day to the next. Of course, human beings remember what they did the next day, unless there is some very terrible problem.

So, all of those benchmarks of intelligence face this issue: they were designed to test human beings. So, we have to find new ways to test intelligence when we're talking about the intelligence of AI systems. That's point number one. But number two is that, so far in the machine learning tradition, we have developed lots of benchmarks to test a system, a narrow AI system.

This is how the machine learning community has made progress over the decades—by beating benchmarks, by having systems that keep improving, percentage by percentage over those target benchmarks. Now, all of those become kind of irrelevant in the era of GPT-4 for two reasons. Number one is GPT-4—we don’t know exactly what data it is being trained on and in particular it might have seen all of these datasets.

So really you cannot separate anymore the training data and the test data. This is not really a meaningful way to test something like GPT-4 because it might have seen everything. For example, Google came out with a suite of benchmarks, which they called Big Bench, and in there they hid a code so that you can check whether a model has seen this data, and of course GPT-4 knows this code.

So, it has seen all of Big Bench. So, you just cannot benchmark it against Big Bench. So, that’s problem number one for the classical ML benchmark. Problem number two is that all those benchmarks are just too easy. It’s just too easy for GPT-4. It crushes all of them, hands down. Very, very easily.

In fact, it’s the same thing for the medical license exam for a multi-state bar exam. All of those things it just passes very, very easily. And the reason why we have to go beyond this is really beyond the classical ML benchmark, we really have to test the generative abilities, the interaction abilities. How is it able to interact with human beings? How is it able to interact with tools?

How creative can it be at the task? All of those questions are very hard to benchmark; they are not the kind of benchmark where there is one right solution. Now, of course, the ML community has grappled with this problem recently, because generative AI has been in the works for a few years now, but the answers are still very tentative.

Just to give you an example, imagine that you want to have a benchmark where you describe a movie and you want to write a movie review. Let’s say, for example, you want to tell the system, write a positive movie review about this movie. Okay. The problem is in your benchmark. In the data, you will have examples of those reviews. And then you ask your system to write its own review, which might be very different from what you have in your training data. So, the question is, is it better to write something different or is it worse? Do you have to match what was in the training data? Maybe GPT-4 is so good that it’s going to write something better than what the humans wrote.

And in fact, we have seen many, many times that the training data was crafted by humans and GPT-4 just does a better job at it. So, it gives better labels, if you want, than what the humans did. You cannot even compare it to humans anymore. So, this is a problem that we are facing as we are writing our paper, trying to assess GPT-4's intelligence.

Ashley Llorens: Give me an example where the model is actually better than the humans.

Sébastien Bubeck: Sure. I mean, let me think of a good one. I mean, coding—it is absolutely superhuman at coding. We already alluded to this and this is going to have tremendous implications. But really coding is incredible. So, for example, going back to the example of movie reviews, there is this IMDB dataset which is very popular in machine learning where you can ask many basic questions that you want to ask.

But now, in the era of GPT-4, you can give it the IMDB dataset and you can just ask GPT-4: can you explore the dataset? And it's going to come up with suggestions of data analysis ideas. Maybe it would say, maybe we want to do some clustering, maybe you want to cluster by the movie directors, and you would see which movies were the most popular and why.

It can come up creatively with its own analysis. So that's one aspect, coding and data analysis, where it can very easily be superhuman. In terms of writing, its writing capabilities are just astounding. For example, in the paper, we asked it many times to rewrite parts of what we wrote, and it writes it in this much more lyrical, poetic way.

You can ask for any kind of style that you want. To my novice eyes, I would say it's at the level of some of the best authors out there. It has its own style, and this is really native. You don't have to do anything.

Ashley Llorens: Yeah, it does it does remind me a little bit of the AlphaGo moment or maybe more specifically the AlphaZero moment, where all of a sudden, you kind of leave the human training data behind. And you’re entering into a realm where it’s its only real competition. You talked about the evolution that we need to have of how we measure intelligence from ways of measuring narrow or specialized intelligence to measuring more general kinds of intelligence.

And we’ve had these narrow benchmarks. You see a lot of this, kind of past the bar exam, these kinds of human intelligence measures. But what happens when all of those are also too easy? How do we think about measurement and assessment in that regime?

Sébastien Bubeck: So, of course, I want to say maybe it’s a good point to bring up the limitations of the system also. Right now a very clear frontier that GPT-4 is not stepping over is to produce new knowledge to discover new things, for example, let’s say in mathematics, to prove mathematical theorems that humans do not know how to prove.
Right now, the systems cannot do it. And this, I think, would be a very clean and clear demonstration, whereas there is just no ambiguity once it can start to produce this new knowledge. Now, of course, whether it’s going to happen or not is an open question. I personally believe it’s plausible. I am not 100 percent sure what’s going to happen, but I believe it is plausible that it will happen.

But then there might be another question, which is what happens if the proof that it produces becomes inscrutable to human beings. Mathematics is not only this abstract thing, but it’s also a language between humans. Of course, at the end of the day, you can come back to the axioms, but that’s not the way we humans do mathematics.

So, what happens if, let’s say, GPT-5 proves the Riemann hypothesis and it is formally proved? Maybe it gives the proof in the LEAN language, which is a formalization of mathematics, and you can formally verify that the proof is correct. But no human being is able to understand the concepts that were introduced.
What does it mean? Is the Riemann hypothesis really proven? I guess it is proven, but is that really what we human beings wanted? So this kind of question might be on the horizon. And that I think ultimately might be the real test of intelligence.

Ashley Llorens: Let’s stick with this category of the limitations of the model. And you kind of drew a line here in terms of producing new knowledge. You offered one example of that as proving mathematical theorems. What are some of the other limitations that you’ve discovered?

Sébastien Bubeck: So, GPT-4 is a large language model which was trained on the next-word-prediction objective function. So, what does it mean? It just means you give it a partial text and you're trying to predict what is going to be the next word in that partial text. Once you want to generate content, you just keep doing that on the text that you're producing. So, you're producing words one by one. Now, of course, it's a question that I have been reflecting upon myself, once I saw GPT-4. It's a question whether human beings are thinking like this. I mean it doesn't feel like it. It feels like we're thinking a little bit more deeply.
We’re thinking a little bit more in advance of what we want to say. But somehow, as I reflect, I’m not so sure, at least when I speak, verbally, orally, maybe I am just coming up every time with the next word. So, this is a very interesting aspect. But the key point is certainly when I’m doing mathematics, I think I am thinking a little bit more deeply.

And I’m not just trying to see what is the next step, but I’m trying to come up with a whole plan of what I want to achieve. And right now the system is not able to do this kind of long-term planning. And we can give a very simple experiment that shows this maybe. My favorite one is, let’s say you have a very simple arithmetic equality—three times seven plus 21 times 27 equals something.

So this is part of the prompt that you give to GPT-4. And now you just ask, okay, you’re allowed to modify one digit in this so that the end result is modified in a certain way. Which one do you choose? So, the way to solve this problem is that you have to think.

You have to try. Okay, what if I were to modify the first digit? What would happen if I were to modify the second digit? What would happen? And GPT-4 is not able to do that. GPT-4 is not able to think ahead in this way. What it will say is just: I think if you modify the third digit, just randomly, it's going to work. And it just tries and it fails. And the really funny aspect is that once GPT-4 starts failing, this becomes part of its context, which in a way becomes part of its truth. So, the failure becomes part of its truth and then it will do anything to justify it.

It will keep making mistakes to keep justifying it. So, these two aspects, the fact that it cannot really plan ahead and that once it makes mistakes, it just becomes part of its truths. These are very, very serious limitations, in particular for mathematics. This makes it a very uneven system, once you approach mathematics.
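To make the kind of look-ahead involved concrete, here is a small brute-force sketch of the digit-modification puzzle. The expression, the goal, and the helper names are illustrative choices, not the exact prompt used with GPT-4; the point is that solving the puzzle requires trying candidate edits and checking their consequences, which is exactly the planning step described above.

```python
def single_digit_edits(expr: str):
    """Yield every expression obtained by changing exactly one digit."""
    for i, ch in enumerate(expr):
        if ch.isdigit():
            for d in "0123456789":
                if d != ch:
                    yield expr[:i] + d + expr[i + 1:]

def safe_eval(expr: str):
    """Evaluate the arithmetic expression, skipping invalid edits such as '01'."""
    try:
        return eval(expr)
    except SyntaxError:
        return None

expression = "3*7 + 21*27"   # "three times seven plus 21 times 27", which equals 588

def find_edit(goal):
    """Return the first one-digit edit whose new value satisfies goal(value)."""
    for candidate in single_digit_edits(expression):
        value = safe_eval(candidate)
        if value is not None and goal(value):
            return candidate, value
    return None

# Example goal: change a single digit so that the result exceeds 1,000.
print(find_edit(lambda v: v > 1000))   # e.g. ('3*7 + 41*27', 1128)
```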

Ashley Llorens: You mentioned something that’s different about machine learning the way it’s conceptualized in this kind of generative AI regime, which is fundamentally different than what we’ve typically thought about as machine learning, where you’re optimizing an objective function with a fairly narrow objective versus when you’re trying to actually learn something about the structure of the data, albeit through this next word prediction or some other way.

What do you think about that learning mechanism? Are there any limitations of that?

Sébastien Bubeck: This is a very interesting question. Maybe I just want to backtrack for a second and just acknowledge that what happened there is kind of a miracle. Nobody, I think nobody in the world, perhaps, except OpenAI, expected that intelligence would emerge from this next word prediction framework just on a lot of data.

I mean, this is really crazy if you think about it. Now, the way I have justified it to myself recently is like this. So, I think it is agreed that deep learning is what powers the GPT-4 training. You have a big neural network that you're training with gradient descent, just trying to fiddle with the parameters.

So, it is agreed that deep learning is this hammer: if you give it a dataset, it will be able to extract the latent structure of that dataset. So, for example, the first breakthrough that happened in deep learning, a little bit more than ten years ago, was the AlexNet moment, where they trained a neural network to basically classify cats, dogs, cars, et cetera, in images.

And when you train this network, what happens is that you have these edge detectors that emerge on the first few layers of the neural network. And nothing in the objective function told you that you have to come up with edge detectors. This was an emergent property. Why? Because it makes sense. The structure of an image is to combine those edges to create geometric shapes.

Right now, I think what’s happening and we have seen this more and more with the large language models, is that there are more and more emerging properties that happen as you scale up the size of the network and the size of the data. Now what I believe is happening is that in the case of GPT-4 they gave it such a big dataset, so diverse with so many complex parameters in it, that the only way to make sense of it, the only latent structure that unifies all of this data is intelligence.

The only way to make sense of the data was for the system to become intelligent. This is kind of a crazy sentence. And I expect the next few years, maybe even the next few decades, will try to make sense of whether this sentence is correct or not. And hopefully, human beings are intelligent enough to make sense of that sentence.

I don’t know right now. I just feel like it’s a reasonable hypothesis that this is what happened there. And so in a way, you can say maybe there is no limitation to the next-word-prediction framework. So that’s one perspective. The other perspective is, actually, the next –word-prediction, token framework is very limiting, at least at generation time.

At least once you start to generate new sentences, you should go beyond a little bit if you want to have a planning aspect, if you want to be able to revisit mistakes that you made. So, there we believe that at least at generation time, you need to have a slightly different system. But maybe in terms of training, in terms of coming up with intelligence in the first place, maybe this is a fine way to do it.
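As a minimal illustration of that word-by-word generation loop, the sketch below uses a toy bigram table in place of a real model; the table and the token names are purely illustrative. The comment on the append line mirrors the earlier point: whatever the model emits, right or wrong, becomes part of the context it conditions on next.

```python
# Illustrative next-token generation loop; the toy bigram table stands in for a real model.
toy_model = {
    "<start>": "the", "the": "unicorn", "unicorn": "has", "has": "a", "a": "horn",
    "horn": "<end>",
}

def next_token(context: list[str]) -> str:
    """Predict the next token from the current context (here, only the last token)."""
    return toy_model.get(context[-1], "<end>")

def generate(max_tokens: int = 10) -> list[str]:
    tokens = ["<start>"]
    for _ in range(max_tokens):
        tok = next_token(tokens)
        if tok == "<end>":
            break
        tokens.append(tok)   # the model's own output becomes part of its context
    return tokens[1:]

print(" ".join(generate()))   # "the unicorn has a horn"
```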

Ashley Llorens: And maybe I’m kind of inspired to ask you a somewhat technical question, though. Yeah. Where I think one aspect of our previous notion of intelligence and maybe still the current notion of intelligence for some is this aspect of compression, the ability to take something complex and make it simple, maybe thinking grounded in Occam’s Razor, where we want to generate the simplest explanation of the data in the thing, some of the things you’re saying and some of the things we’re seeing in the model kind of go against that intuition.

So talk to me a little bit about that.

Sébastien Bubeck: Absolutely. So, I think this is really exemplified well in the project that we did here at Microsoft Research a year ago, which we called LEGO. So let me tell you about this very briefly, because it really gets to the point of what you're trying to say. So, let's say you want to train an AI system that can solve middle school systems of linear equations.

So, maybe it’s X plus Y equals Z, three X minus two, Y equals one, and so on. You have three equations with two variables. And you want to train a neural network that takes in this system of equation and outputs the answer for it. The classical perspective, the Occam’s Razor perspective would be collected dataset with lots of equations like this train the system to solve those linear equation.

And there you go. This way, you have the same kind of distribution at training time and at test time. What this new paradigm of deep learning, and in particular of large language models, would say is instead: even though your goal is to solve systems of linear equations for middle school students, you don't just train on middle school systems of equations.

Now, we’re going to collect a hugely diverse list of data maybe we’re going to do next. One prediction not only on the systems of linear equation, but also on all of Wikipedia. So, this is now a very concrete experiment. You have to learn networks. Neural network A, train on the equations. Neural network B, train on the equations, plus Wikipedia. And any kind of classical thinking would tell you that neural network B is going to do worse because it has to do more things, it’s going to get more confused. It’s not the simplest way to solve the problem. But lo and behold, if you actually run the experiment for real, Network B is much, much, much better than network A. Now I need to quantify this a little bit. Network A, if it was trained with systems of linear regression with three variables, is going to be fine on systems of linear regression with three variables.

But as soon as you ask it four variables or five variables, it's not able to do it. It didn't really get to the essence of what it means to solve linear equations, whereas network B not only solves systems of equations with three variables, but it also does four, it also does five, and so on.

Now the question is why? What's going on? Why is it that making the thing more complicated, going against Occam's razor, is a good idea? And the extremely naive perspective, which in fact some people have suggested because it is so mysterious, would be that maybe it read the Wikipedia page on solving systems of linear equations.

But of course, that’s not what happened. And this is another aspect of this whole story, which is anthropomorphication of the system is a big danger. But let’s not get into that right now. But the point is that’s not at all the reason why we became good at solving systems of linear equations.

It’s rather that it had this very diverse data and it forced it to come up with unifying principles, more canonical, component of intelligence. And then it’s able to compose this canonical component of intelligence to solve the task at hand.

Ashley Llorens: I want to go back to something you said much earlier around natural evolution versus this notion of artificial evolution. And I think that starts to allude to where I think you want to take this field next, at least in terms of your study and your group. And that is, focusing on the aspect of emergence and how intelligence emerges.

So, what do you see as the way forward from this point, from your work with Lego that you just described for you and for the field?

Sébastien Bubeck: Yes, absolutely. So, I would argue that maybe we need a new name for machine learning in a way. GPT-4 and GPT-3 and all those other large language models, in some ways, it's not machine learning anymore. And by that I mean machine learning is all about how do you teach a machine a very well-defined task? Recognize cats and dogs, something like that. But here, that's not what we're doing. We're not trying to teach it a narrow task. We're trying to teach it everything. And we're not trying to mimic how a human would learn. This is another point of confusion. Some people say, oh, but it's learning language using more text than any human would ever see.

But that’s kind of missing the point. The point is we’re not trying to mimic human learning. And that’s why maybe learning is not the right word anymore. We’re really trying to mimic something which is more akin to evolution. We’re trying to mimic the experience of millions, billions of entities that interact with the world. In this case, the world is the data that humans produced.

So, it’s a very different style. And I believe the reason why all the tools that we have introduced in machine learning are kind of useless and almost irrelevant in light of GPT-4 is because it’s a new field. It’s something that needs new tools to be defined. So we hope to be at the forefront of that and we want to introduce those new tools.

And of course, we don’t know what it’s going to look like, but the avenue that we’re taking to try to study this is to try to understand emergence. So emergence again is this phenomenon that as you scale up the network and the data, suddenly there are new properties that emerge at every scale. Google had this experiment where they scaled up their large language models from 8 billion to 60 billion to 500 billion.
At 8 billion, it's able to understand language and do a little bit of arithmetic. At 60 billion, suddenly it's able to translate between languages; before, it couldn't translate. At 60 billion parameters, suddenly it can translate. At 500 billion, suddenly it can explain jokes. Why can it suddenly explain jokes?
So, we really would like to understand this. And there is another field out there that has been grappling with emergence for a long time, trying to study systems of very complex particles interacting with each other and leading to some emergent behaviors.

What is this field? It’s physics. So, what we would like to propose is let’s study the physics of AI or the physics of AGI, because in a way, we are really seeing this general intelligence now. So, what would it mean to study the physics of AGI? What it would mean is, let’s try to borrow from the methodology that physicists have used for the last few centuries to make sense of reality.

And what (were) those tools? Well, one of them was to run very controlled experiments. If you look at a waterfall, you observe the water flowing and going in all kinds of ways, and if you go look at it in the winter, it's frozen. Good luck trying to make sense of the phases of water by just staring at the waterfall. GPT-4 or LaMDA or the flash language model.

These are all waterfalls. What we need are much more small scale controlled experiments where we know we have pure water. It’s not being tainted by the stone, by the algae. We need those controlled experiments to make sense of it. And LEGO is one example. So that’s one direction that we want to take. But in physics there is another direction that you can take, which is to build toy mathematical models of the real world.

You try to abstract away lots of things, and you’re left with a very simple mathematical equation that you can study. And then you have to go back to really experiment and see whether the prediction from the toy mathematical model tells you something about the real experiment. So that’s another avenue that we want to take. And then we made some progress recently also with interns at (Microsoft Research).

So, we have a paper which is called Learning Threshold Units. And here we're really able to understand how the most basic element, I don't want to say of intelligence, but the most basic element of reasoning, emerges in those neural networks. And what is this most basic element of reasoning? It's a threshold unit. It's something that takes as input some value.

And if the value is too small, then it just turns it to zero. And this emergence already is a very, very complicated phenomenon. We were able to understand the non-convex dynamics at play and connect them to what is called the edge of stability, which is all very exciting. But the key point is that we have a toy mathematical model, and there, in essence, what we were able to do is to say that emergence is related to instability in training, which is very surprising, because usually in classical machine learning, instability is something that you do not want; you want to erase all the instabilities.

And somehow, through this physics of AI approach, where we have a toy mathematical model, we are able to say that the instability in training that you're seeing, that everybody has been seeing for decades now, actually matters for learning and for emergence. So, this is the first edge that we took.
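As a point of reference, a threshold unit of the kind described above can be written in a couple of lines; the threshold value here is an arbitrary illustrative choice, not the one studied in the paper.

```python
import numpy as np

def threshold_unit(x: np.ndarray, b: float = 0.5) -> np.ndarray:
    """Pass the input through where it exceeds the threshold b; output zero elsewhere."""
    return np.where(x > b, x, 0.0)

x = np.linspace(-1.0, 1.0, 9)
print(threshold_unit(x))   # values at or below the threshold are zeroed out
```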

Ashley Llorens: I want to come back to this aspect of interaction and want to ask you if you see fundamental limitations with this whole methodology around certain kinds of interactions. So right now we’ve been talking mostly about these models interacting with information in information environments, with information that people produce, and then producing new information behind that.

The source of that information is actual humans. So, I want to know if you see any limitations or if this is an aspect of your study, how we make these models better at interacting with humans, understanding the person behind the information produced. And after you do that, I’m going to come back and we’ll ask the same question of the natural world in which we as humans reside.

Sébastien Bubeck: Absolutely. So, this is one of the emergent properties of GPT-4 to put it very simply, that not only can it interact with information, but it can actually interact with humans, too. You can communicate with it. You can discuss, and you’re going to have very interesting discussions. In fact, some of my most interesting discussions in the last few months were with GPT-4.

So this is surprising, not at all something we would have expected, but it's there. Not only that, but it also has a theory of mind. So GPT-4 is able to reason about what somebody is thinking, what somebody is thinking about what somebody else is thinking, and so on. So, it really has a very sophisticated theory of mind. There was recently a paper saying that ChatGPT is roughly at the level of a seven-year-old in terms of its theory of mind. For GPT-4, I cannot really distinguish it from an adult. Just to give you an anecdote, I don't know if I should say this, but one day in the last few months, I had an argument with my wife and she was telling me something.

And I just didn’t understand what she wanted from me. And I just talked with GPT-4 I explained the situation. And that’s what’s going on. What should I be doing? And the answer was so detailed, so thoughtful. I mean, I’m really not making this up. This is absolutely real. I learned something from GPT-4 about human interaction with my wife.

This is as real as it gets. And so, I can’t see any limitation right now in terms of interaction. And not only can it interact with humans, but it can also interact with tools. And so, this is the premise in a way of the new Bing that was recently introduced, which is that this new model, you can tell it “hey, you know what, you have access to a search engine.”

“You can use Bing if there is some information that you’re missing and you need to find it out, please make a Bing search.” And somehow natively, this is again, an emergent property. It’s able to use a search engine and make searches when it needs to, which is really, really incredible. And not only can it use those tools which are well-known, but you can also make up tools.

You can tell it, you can say, hey, I invented some API. Here is what the API does. Now please solve problem XYZ using that API, and it's able to do it natively. It's able to understand your natural language description of what the API that you built is doing, and it's able to leverage its power and use it.

This is really incredible and opens so many directions.
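The "describe a made-up API and let the model use it" pattern can be sketched roughly as follows. The weather tool, its calling convention, and the llm helper are all hypothetical placeholders, not an API that GPT-4 or Bing actually exposes.

```python
def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError("wire this up to a model of your choice")

tool_description = """
You can call one tool:
  weather(city: str, date: str) -> str
It returns a short plain-text forecast for that city and date.
To call it, reply with a single line of the form: CALL weather("<city>", "<date>")
"""

def solve_with_tool(task: str) -> str:
    reply = llm(tool_description + "\nTask: " + task + "\n")
    if reply.startswith("CALL "):
        # Here the application would parse the call, run the real tool, and feed
        # the tool's output back to the model for a final answer.
        pass
    return reply
```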

Ashley Llorens: We certainly see some super impressive capabilities like the new integration with Bing, for example. We also see some of those limitations come into play. Tell me about your exploration of those in this context.

Sébastien Bubeck: So, one keyword that didn't come up yet, and which is going to drive the conversation forward, at least online and on Twitter, is hallucinations. Those models, including GPT-4, still hallucinate a lot. And in a way, for good reason: it's on a spectrum where on the one hand you have bad hallucination, completely making up facts which are contrary to the real facts in the real world.

But on the other end, you have creativity. When you create, when you generate new things, you are, in a way, hallucinating. It’s good hallucinations, but still these are hallucinations. So having a system which can both be creative, but does not hallucinate at all—it’s a very delicate balance. And GPT-4 did not solve that problem yet. It made a lot of progress, but it didn’t solve it yet.

So that’s still a big limitation, which the world is going to have to grapple with. And I think in the new Bing it’s very clearly explained that it is still making mistakes from time to time and that you need to double check the result. I still think the rough contours of what GPT-4 says and the new Bing says is really correct.
And it’s a very good first draft most of the time, and you can get started with that. But then, you need to do your research and it cannot be used for critical missions yet. Now what’s interesting is GPT-4 is also intelligent enough to look over itself. So, once it produced a transcript, you can ask another instance of GPT-4 to look over what the first instance did and to check whether there is any hallucination.

This works particularly well for what I would call in-context hallucination. So, what would be in-context hallucination? Let’s say you have a text that you’re asking it to summarize and maybe in the summary, it invents something that was not out there. Then the other instance of GPT-4 will immediately spot it. So that’s basically in-context hallucination.

We believe those can be fully solved soon. The open-world type of hallucination is when you ask about anything; for example, in our paper we ask: where is the McDonald's at Sea-Tac, the airport in Seattle? And it responds: gate C2. And the answer is not C2; the answer is B3. So this type of open-world hallucination is much more difficult to resolve.

And we don’t know yet how to do that. Exactly.

Ashley Llorens: Do you see a difference between a hallucination and a factual error?

Sébastien Bubeck: I have to think about this one. I would say that no, I do not really see a difference between the hallucination and the factual error. In fact, I would go as far as saying that when it’s making arithmetic mistakes, which again, it still does, when it adds two numbers, you can also view it as some kind of hallucination.

And by that I mean it’s kind of a hallucination by omission. And let me explain what I mean. So, when it does an arithmetic calculation, you can actually ask it to print each step and that improves the accuracy. It does a little bit better if it has to go through all the steps and this makes sense from the next word prediction framework.

Now, what happens is that very often it will skip a step. It would kind of forget something. This can be viewed as a kind of hallucination: it just hallucinated that this step is not necessary and that it can move on to the next stage immediately. And so, this kind of factual error, or in this case a reasoning error if you want, they are all related to the same concept of hallucination. There could be many ways to resolve those hallucinations.

Maybe we want to look inside the model a little bit more. Maybe we want to change the training pipeline a little bit. Maybe the reinforcement learning with human feedback can help. All of these are small patches, and still I want to make it clear to the audience that it's an open academic problem whether any of those directions can eventually fix it, or whether it is a fatal error for large language models that will never be fixed.
We do not know the answer to that question.

Ashley Llorens: I want to come back to this notion of interaction with the natural world.

As human beings, we learn about the natural world through interaction with it. We start to develop intuitions about things like gravity, for example. And there is an argument or debate right now in the community as to how much of that knowledge of how to interact with the natural world is encoded and learnable from language and the kinds of information inputs that we put into the model versus how much actually needs to be implicitly or explicitly encoded in an architecture or just learned through interaction with the world.

What do you see here? Do you see a fundamental limitation with this kind of architecture for that purpose?

Sébastien Bubeck: I do think that there is a fundamental limit. I mean, that there is a fundamental limitation in terms of the current structure of the pipeline. And I do believe it’s going to be a big limitation once you ask the system to discover new facts. So, what I think is the next stage of evolution for the systems would be to hook it up with a simulator of sorts so that the system at training time when it’s going through all of the web, is going through all of the data produced by humanity.

So if it realizes, oh, maybe I need more data of a certain type, then we want to give it access to a simulator so that it can produce its own data and run experiments, which is really what babies are doing. Infants—they run experiments when they play with a ball, when they look at their hand in front of their face.

This is an experiment. So, we do need to give the system a way to do experiments. Now, the problem with this is that you get into a bit of a dystopian discussion: do we really want to give these systems, which are superintelligent in some ways, access to simulators?
Aren't we afraid that they will become superhuman in every way if some of the experiments that they can run are to run code or to access the internet? There are lots of questions about what could happen. And it's not hard to imagine what could go wrong there.

Ashley Llorens: It’s a good segue into maybe a last question or topic to explore, which comes back to this phrase AGI—artificial general intelligence. In some ways, there’s kind of a lowercase version of that where we talk towards more generalizable kinds of intelligence. That’s the regime that we’ve been exploring. Then there’s a kind of a capital letter version of that, which is almost like a like a sacred cow or a kind of dogmatic pursuit within the AI community. So, what does that capital letter phrase AGI, mean to you? And maybe part B of that is: is our classic notion of AGI the right goal for us to be aiming for?

Sébastien Bubeck: Before interacting with GPT-4, to me, AGI was this unachievable dream. Some think it’s not even clear whether it’s doable. What does it even mean? And really by interacting with GPT-4, I suddenly had the realization that actually general intelligence, it’s something very concrete.

It’s able to understand any kind of topics that you bring up. It is going to be able to reason about any of the things that you want to discuss. It can bring up information, it can use tools, it can interact with humans, it can interact with an environment. This is general intelligence. Now, you’re totally right in encoding it “lowercase” AGI.

Why is it not uppercase AGI? Because it’s still lacking some of the fundamental aspects, two of them, which are really, really important. One is memory. So, every new session with GPT-4 is a completely fresh tabula rasa session. It’s not remembering what you did yesterday with it. And it’s something which is emotionally hard to take because you kind of develop a relationship with the system.

As crazy as it sounds, that’s really what happens. And so you’re kind of disappointed that it doesn’t remember all the good times that you guys had together. So this is one aspect. The other one is the learning. Right now, you cannot teach it new concepts very easily. You can turn the big crank of retraining the model.

Sure, you can do that, but I’ll give you the example of using a new API. Tomorrow, you have to explain it again. So, of course, learning and memory, those two things are very, very related, as I just explained. So, this is one huge limitation to me.

If it had that, I think it would qualify as uppercase AGI. Now, not everybody would agree even with that, because many people would say, no, it needs to be embodied, to have real world experience. This becomes a philosophical question. Is it possible to have something that you would call a generally intelligent being that only lives in the digital world?

I don’t see any problem with that, honestly. I cannot see any issue with this. Now, there is another aspect. Once you get into this philosophical territory, which is right now the systems have no intrinsic motivation. All they want to do is to generate the next token. So, is that also an obstruction to having something which is a general intelligence?

Again, to me this becomes more philosophical than really technical, but maybe there is some aspect which is technical there. Again, if you start to hook up the systems to simulators, to run their own experiments, then certainly they may have some intrinsic motivation to just improve themselves. So maybe that's one technical way to resolve the question. I don't know.

Ashley Llorens: That’s interesting. And I think there’s a word around that phrase in the community. Agent. Or seeing “agentic” or goal-oriented behaviors. And that is really where you start to get into the need for serious sandboxing or alignment or other kinds of guardrails for the system that actually starts to exhibit goal-oriented behavior.

Sébastien Bubeck: Absolutely. Maybe one other point that I want to bring up about AGI, which I think is confusing a lot of people. Somehow when people hear general intelligence, they want something which is truly general that could grapple with any kind of environment. And not only that, but maybe that grapples with any kind of environment and does so in a sort of optimal way.

This universality and optimality, I think, are completely irrelevant to intelligence. Intelligence has nothing to do with universality or optimality. We as human beings are notoriously not universal. I mean, you change a little bit the condition of your environment, and you’re going to be very confused for a week. It’s going to take you months to adapt.

So, we are very, very far from universal and I think I don’t need to tell anybody that we’re very far from being optimal. The number of crazy decisions that we make every second is astounding. So, we’re not optimal in any way. So, I think it is not realistic to try to have an AGI that would be universal and optimal. And it’s not even desirable in any way, in my opinion. So that’s maybe not achievable and not even realistic, in my opinion.

Ashley Llorens: Is there an aspect of complementarity that we should be striving for in, say, a refreshed version of AGI or this kind of long-term goal for AI?

Sébastien Bubeck: Yeah, absolutely. But I don't want to be here in this podcast today and try to say what my view is on this question, because I think it's really the community that should come together and discuss this in the coming weeks, months, and years, and decide where we want to go, where society wants to go, and so on.

I think it’s a terribly important question. And we should not dissociate our futuristic goal with the technical innovation that we’re trying to do day-to-day. We have to take both into account. But I imagine that this discussion will happen and we will know a lot more a year from now, hopefully.

Ashley Llorens: Thanks, Sébastien. Just a really fun and fascinating discussion. Appreciate your time today.

Sébastien Bubeck: Yeah, thanks, Ashley it was super fun.



Research Focus: Week of March 6, 2023


Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft.

NEW RESEARCH

Hide and seek with Spectres

Attack methods like Spectre exploit speculative execution, one of the key performance optimizations of modern CPUs. Microsoft researchers are working on a novel testing tool that can automatically detect speculative leaks in commercial (black-box) CPUs. However, until now the testing process has been slow, which has hindered in-depth testing campaigns and the discovery of new classes of leakage.

In a new paper: Hide and Seek with Spectres: Efficient discovery of speculative information leaks with random testing, researchers from Microsoft and academic collaborators identify the root causes of the performance limitations in existing approaches—and propose techniques to overcome them. These techniques improve the testing speed over the state of the art by up to two orders of magnitude.

These improvements enabled the researchers to run a testing campaign of unprecedented depth on Intel and AMD CPUs. In the process, they discovered two types of previously unknown speculative leaks (affecting string comparison and division) that have escaped previous manual and automatic analyses.

The paper that describes the novel techniques will appear at the IEEE Symposium on Security and Privacy 2023.

Spotlight: On-Demand EVENT

Microsoft Research Summit 2022

On-Demand
Watch now to learn about some of the most pressing questions facing our research community and listen in on conversations with 120+ researchers around how to ensure new technologies have the broadest possible benefit for humanity.

PODCAST

Microsoft Research’s Philipp Witte on improving carbon sequestration with AI

Reducing carbon dioxide in the atmosphere could play an important role in minimizing climate change. Carbon sequestration – the process of locking carbon dioxide in deep underground reservoirs – is a developing technology that could make a meaningful contribution if it were deployed at scale. Deep learning AI technologies can improve the models required to develop these reservoirs, which could help scale up sequestration projects to a meaningful level.

Philipp Witte, a researcher with Microsoft Research for Industry, recently chatted with Fixing the Future from IEEE Spectrum about how AI can help improve carbon sequestration. For example, AI can facilitate computationally intensive simulations and manage complex modeling requirements more efficiently than conventional methods.

Tune in to the podcast for a lively discussion at the intersection of AI and decarbonization.


OPPORTUNITY

Microsoft Research Data Science Summer School – Apply now

Microsoft Research New York City’s Data Science Summer School (DS3) is an intensive hands-on introduction to data science for local undergraduate students interested in attending graduate school in computer science and related fields. The curriculum includes coursework in data science and group research projects. This year’s program runs from May 30 to June 23, and applications will be accepted through April 11.

The course is taught by leading scientists at Microsoft Research New York City and each participant will receive a laptop and $3,000 stipend. Sessions will cover tools and techniques for acquiring, cleaning, and utilizing real-world data for research purposes. The course also serves as an introduction to problems in applied statistics and machine learning, and will cover the theory behind simple but effective methods for supervised and unsupervised learning.

Applicants must be currently enrolled in an undergraduate program in the New York City area. We strongly encourage people from diverse, non-traditional, and under-represented backgrounds in STEM to apply.

Check out the program details and apply today.

The post Research Focus: Week of March 6, 2023 appeared first on Microsoft Research.


Responsible AI: The research collaboration behind new open-source tools offered by Microsoft


Flowchart showing how the responsible AI tools are used together for targeted debugging of machine learning models: the Responsible AI Dashboard identifies failures; the Dashboard and the Responsible AI Mitigations Library diagnose them; the Mitigations Library mitigates them; and the Responsible AI Tracker tracks, compares, and validates mitigation techniques, looping back to identification as models and data continue to evolve during the ML lifecycle.

As computing and AI advancements spanning decades are enabling incredible opportunities for people and society, they’re also raising questions about responsible development and deployment. For example, the machine learning models powering AI systems may not perform the same for everyone or every condition, potentially leading to harms related to safety, reliability, and fairness. Single metrics often used to represent model capability, such as overall accuracy, do little to demonstrate under which circumstances or for whom failure is more likely; meanwhile, common approaches to addressing failures, like adding more data and compute or increasing model size, don’t get to the root of the problem. Plus, these blanket trial-and-error approaches can be resource intensive and financially costly.

Through its Responsible AI Toolbox, a collection of tools and functionalities designed to help practitioners maximize the benefits of AI systems while mitigating harms, and other efforts for responsible AI, Microsoft offers an alternative: a principled approach to AI development centered around targeted model improvement. Targeted model improvement aims to identify solutions tailored to the causes of specific failures. This is a critical part of a model improvement life cycle that includes not only the identification, diagnosis, and mitigation of failures but also the tracking, comparison, and validation of mitigation options. The approach supports practitioners in better addressing failures without introducing new ones or eroding other aspects of model performance.

“With targeted model improvement, we’re trying to encourage a more systematic process for improving machine learning in research and practice,” says Besmira Nushi, a Microsoft Principal Researcher involved with the development of tools for supporting responsible AI. She is a member of the research team behind the toolbox’s newest additions: the Responsible AI Mitigations Library, which enables practitioners to more easily experiment with different techniques for addressing failures, and the Responsible AI Tracker, which uses visualizations to show the effectiveness of the different techniques for more informed decision-making.

Targeted model improvement: From identification to validation

The tools in the Responsible AI Toolbox, available in open source and through the Azure Machine Learning platform offered by Microsoft, have been designed with each stage of the model improvement life cycle in mind, informing targeted model improvement through error analysis, fairness assessment, data exploration, and interpretability.

For example, the new mitigations library bolsters mitigation by offering a means of managing failures that occur in data preprocessing, such as those caused by a lack of data or lower-quality data for a particular subset. For tracking, comparison, and validation, the new tracker brings model, code, visualizations, and other development components together for easy-to-follow documentation of mitigation efforts. The tracker's main feature is disaggregated model evaluation and comparison, which breaks down model performance by data subset to present a clearer picture of a mitigation's effects on the intended subset, as well as on other subsets, helping to uncover hidden performance declines before models are deployed and used by individuals and organizations. The tracker also lets practitioners examine performance for subsets of data across iterations of a model to help them determine the most appropriate model for deployment.
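To make the idea of disaggregated evaluation and comparison concrete, here is a minimal sketch that compares two candidate models cohort by cohort rather than on overall accuracy alone. It uses plain pandas and scikit-learn with hypothetical data, cohort names, and models; it illustrates the concept behind the tracker's disaggregated view, not the Responsible AI Tracker's actual implementation.

```python
# Minimal sketch of disaggregated model comparison: per-cohort accuracy for
# two candidate models instead of a single overall score.
# The data, cohort column, and models are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "x1": rng.normal(size=n),
    "x2": rng.normal(size=n),
    "cohort": rng.choice(["A", "B", "C"], size=n, p=[0.6, 0.3, 0.1]),
})
# Make labels noisier for the small cohort "C" so per-cohort gaps show up.
noise = np.where(df["cohort"] == "C", 1.5, 0.3)
df["label"] = ((df["x1"] + df["x2"] + rng.normal(scale=noise, size=n)) > 0).astype(int)

train, test = train_test_split(df, test_size=0.3, random_state=0)
features = ["x1", "x2"]

baseline = LogisticRegression().fit(train[features], train["label"])
candidate = RandomForestClassifier(random_state=0).fit(train[features], train["label"])

def per_cohort_accuracy(model, data):
    """Accuracy broken down by cohort, plus the overall score."""
    scores = {
        cohort: accuracy_score(group["label"], model.predict(group[features]))
        for cohort, group in data.groupby("cohort")
    }
    scores["overall"] = accuracy_score(data["label"], model.predict(data[features]))
    return pd.Series(scores)

# A disaggregated view reveals per-cohort gains or regressions that the
# overall score alone would hide.
comparison = pd.DataFrame({
    "baseline": per_cohort_accuracy(baseline, test),
    "candidate": per_cohort_accuracy(candidate, test),
})
print(comparison)
```

In a real workflow, the tracker is designed to provide this kind of per-cohort view across model iterations without hand-rolled scripts like the one above.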


“Data scientists could build many of the functionalities that we offer with these tools; they could build their own infrastructure,” says Nushi. “But to do that for every project requires a lot of effort and time. The benefit of these tools is scale. Here, they can accelerate their work with tools that apply to multiple scenarios, freeing them up to focus on the work of building more reliable, trustworthy models.”

Besmira Nushi, Microsoft Principal Researcher

Building tools for responsible AI that are intuitive, effective, and valuable can help practitioners consider potential harms and their mitigation from the beginning when developing a new model. The result can be more confidence that the work they’re doing is supporting AI that is safer, fairer, and more reliable because it was designed that way, says Nushi. The benefits of using these tools can be far-reaching—from contributing to AI systems that more fairly assess candidates for loans by having comparable accuracy across demographic groups to traffic sign detectors in self-driving cars that can perform better across conditions like sun, snow, and rain.

Converting research into tools for responsible AI

Creating tools that can have the impact researchers like Nushi envision often begins with a research question and involves converting the resulting work into something people and teams can readily and confidently incorporate in their workflows.

“Making that jump from a research paper’s code on GitHub to something that is usable involves a lot more process in terms of understanding what is the interaction that the data scientist would need, what would make them more productive,” says Nushi. “In research, we come up with many ideas. Some of them are too fancy, so fancy that they cannot be used in the real world because they cannot be operationalized.”

Multidisciplinary research teams consisting of user experience researchers, designers, and machine learning and front-end engineers have helped ground the process, as have the contributions of those who specialize in all things responsible AI. Microsoft Research works closely with the incubation team of Aether, the advisory body for Microsoft leadership on AI ethics and effects, to create tools based on the research. Equally important has been partnership with product teams whose mission is to operationalize AI responsibly, says Nushi. For Microsoft Research, that is often Azure Machine Learning, the Microsoft platform for end-to-end ML model development. Through this relationship, Azure Machine Learning can offer what Microsoft Principal PM Manager Mehrnoosh Sameki refers to as customer "signals," essentially a reliable stream of wants and needs coming directly from practitioners on the ground. And Azure Machine Learning is just as excited to leverage what Microsoft Research and Aether have to offer: cutting-edge science. The relationship has been fruitful.

When the current Azure Machine Learning platform made its debut five years ago, it was clear that tooling for responsible AI was going to be necessary. Beyond aligning with the Microsoft vision for AI development, such tooling was something customers were actively seeking out. They approached the Azure Machine Learning team with requests for explainability and interpretability features, robust model validation methods, and fairness assessment tools, recounts Sameki, who leads the Azure Machine Learning team in charge of tooling for responsible AI. Microsoft Research, Aether, and Azure Machine Learning teamed up to integrate tools for responsible AI into the platform, including InterpretML for understanding model behavior, Error Analysis for identifying data subsets for which failures are more likely, and Fairlearn for assessing and mitigating fairness-related issues. InterpretML and Fairlearn are independent community-driven projects that power several Responsible AI Toolbox functionalities.
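As a small illustration of the kind of disaggregated fairness assessment Fairlearn supports, the sketch below uses Fairlearn's MetricFrame to break a model's accuracy down by a sensitive feature. The dataset, model, and column names (including "gender") are hypothetical placeholders; refer to the Fairlearn documentation for the full API.

```python
# Minimal sketch of disaggregated fairness assessment with Fairlearn's
# MetricFrame. The dataset, model, and "gender" column are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame

df = pd.DataFrame({
    "feature_1": [0.2, 0.4, 0.1, 0.9, 0.7, 0.3, 0.8, 0.5],
    "feature_2": [1, 0, 1, 0, 1, 1, 0, 0],
    "gender":    ["F", "M", "F", "M", "F", "M", "F", "M"],
    "label":     [1, 0, 1, 1, 0, 0, 1, 0],
})

X = df[["feature_1", "feature_2"]]
y = df["label"]

model = LogisticRegression().fit(X, y)
y_pred = model.predict(X)

# Break overall accuracy down by the sensitive feature.
metric_frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y,
    y_pred=y_pred,
    sensitive_features=df["gender"],
)
print("Overall accuracy:", metric_frame.overall)
print("Accuracy by group:\n", metric_frame.by_group)
```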

Before long, Azure Machine Learning approached Microsoft Research with another signal: customers wanted to use the tools together, in one interface. The research team responded with an approach that enabled interoperability, allowing the tools to exchange data and insights and facilitating a seamless ML debugging experience. Over the course of two to three months, the teams met weekly to conceptualize and design "a single pane of glass" from which practitioners could use the tools collectively. As Azure Machine Learning developed the project, Microsoft Research stayed involved, from providing design expertise to helping shape how the story and capabilities of what had become the Responsible AI dashboard would be communicated to customers.

After the release, the teams dived into the next open challenge: enabling practitioners to better mitigate failures. Enter the Responsible AI Mitigations Library and the Responsible AI Tracker, which were developed by Microsoft Research in collaboration with Aether. Microsoft Research was well equipped with the resources and expertise to figure out the most effective visualizations for disaggregated model comparison (there was very little prior work available on the topic) and to navigate the abstractions needed to apply different mitigations to different subsets of data through a flexible, easy-to-use interface. Throughout the process, the Azure team provided insight into how the new tools would fit into the existing infrastructure.
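To give a flavor of what applying a mitigation to a specific subset of data can mean in practice, the sketch below oversamples an underperforming cohort in the training data before retraining, then re-evaluates both cohorts. It is a generic, hypothetical example of a targeted mitigation built on pandas and scikit-learn, not the Responsible AI Mitigations Library's actual interface.

```python
# Minimal sketch of a targeted mitigation: oversample an underperforming
# cohort in the training data, retrain, and re-check both cohorts.
# Data, cohort names, and models are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "x1": rng.normal(size=n),
    "x2": rng.normal(size=n),
    "cohort": rng.choice(["common", "rare"], size=n, p=[0.9, 0.1]),
})
# The "rare" cohort follows a different pattern, so a model trained mostly
# on "common" examples tends to serve it poorly.
df["label"] = ((df["x1"] + np.where(df["cohort"] == "rare", -df["x2"], df["x2"])) > 0).astype(int)

train, test = df.iloc[:400], df.iloc[400:]
features = ["x1", "x2"]

def cohort_accuracy(model, data, cohort):
    subset = data[data["cohort"] == cohort]
    return accuracy_score(subset["label"], model.predict(subset[features]))

baseline = LogisticRegression().fit(train[features], train["label"])

# Targeted mitigation: duplicate the "rare" cohort's training rows so the
# retrained model pays more attention to that subset.
rare_rows = train[train["cohort"] == "rare"]
boosted_train = pd.concat([train] + [rare_rows] * 4, ignore_index=True)
mitigated = LogisticRegression().fit(boosted_train[features], boosted_train["label"])

for name, model in [("baseline", baseline), ("mitigated", mitigated)]:
    print(name,
          "| rare:", round(cohort_accuracy(model, test, "rare"), 3),
          "| common:", round(cohort_accuracy(model, test, "common"), 3))
```

Comparing the per-cohort numbers before and after such a change is exactly the kind of tracking and validation the tracker's disaggregated views are meant to support.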

With the Azure team bringing practitioner needs and the platform to the table and research bringing the latest in model evaluation, responsible testing, and the like, it is the perfect fit, says Sameki.

An open-source approach to tooling for responsible AI

While making these tools available through Azure Machine Learning supports customers in bringing their products and services to market responsibly, making them open source is important to cultivating an even larger landscape of responsibly developed AI. When release-ready, the tools for responsible AI are made open source first and then integrated into the Azure Machine Learning platform. The reasons for this open-source-first approach are numerous, say Nushi and Sameki:

  • freely available tools for responsible AI are an educational resource for learning and teaching the practice of responsible AI;
  • more contributors, both internal to Microsoft and external, add quality, longevity, and excitement to the work and topic; and
  • the ability to integrate them into any platform or infrastructure encourages more widespread use.

The decision also represents one of the Microsoft AI principles in action—transparency.


“In the space of responsible AI, being as open as possible is the way to go, and there are multiple reasons for that,” says Sameki. “The main reason is for building trust with the users and with the consumers of these tools. In my opinion, no one would trust a machine learning evaluation technique or an unfairness mitigation algorithm that is unclear and closed source. Also, this field is very new. Innovating in the open nurtures better collaborations in the field.”

Mehrnoosh Sameki, Microsoft Principal PM Manager

Looking ahead

AI capabilities are only advancing. The larger research community, practitioners, the tech industry, government, and other institutions are working in different ways to steer these advancements in a direction in which AI is contributing value and its potential harms are minimized. Practices for responsible AI will need to continue to evolve with AI advancements to support these efforts.

For Microsoft researchers like Nushi and product managers like Sameki, that means fostering cross-company, multidisciplinary collaborations in their continued development of tools that encourage targeted model improvement guided by the step-by-step process of identification, diagnosis, mitigation, and comparison and validation—wherever those advances lead.

“As we get better in this, I hope we move toward a more systematic process to understand what data is actually useful, even for the large models; what is harmful that really shouldn’t be included in those; and what is the data that has a lot of ethical issues if you include it,” says Nushi. “Building AI responsibly is crosscutting, requiring perspectives and contributions from internal teams and external practitioners. Our growing collection of tools shows that effective collaboration has the potential to impact—for the better—how we create the new generation of AI systems.”

The post Responsible AI: The research collaboration behind new open-source tools offered by Microsoft appeared first on Microsoft Research.
