Innovations in AI: Brain-inspired design for more capable and sustainable technology

As AI research and technology development continue to advance, there is also a need to account for the energy and infrastructure resources required to manage large datasets and execute difficult computations. When we look to nature for models of efficiency, the human brain stands out, resourcefully handling complex tasks. Inspired by this, researchers at Microsoft are seeking to understand the brain’s efficient processes and replicate them in AI. 

At Microsoft Research Asia, in collaboration with Fudan University, Shanghai Jiao Tong University, and the Okinawa Institute of Science and Technology, three notable projects are underway. One introduces a neural network that simulates the way the brain learns and computes information; another enhances the accuracy and efficiency of predictive models for future events; and a third improves AI’s proficiency in language processing and pattern prediction. These projects, highlighted in this blog post, aim not only to boost performance but also to significantly reduce power consumption, paving the way for more sustainable AI technologies. 

CircuitNet simulates brain-like neural patterns 

Many AI applications rely on artificial neural networks designed to mimic the brain’s complex neural patterns, yet these networks typically replicate only one or two types of connectivity. In contrast, the brain propagates information using a variety of neural connection patterns, including feedforward excitation and inhibition, mutual inhibition, lateral inhibition, and feedback inhibition (Figure 1). The brain’s networks contain densely interconnected local areas with fewer connections between distant regions. Each neuron forms thousands of synapses to carry out specific tasks within its region, while some synapses link different functional clusters—groups of interconnected neurons that work together to perform specific functions. 

Diagram illustrating four common neural connectivity patterns in the biological neural networks: Feedforward, Mutual, Lateral, and Feedback. Each pattern consists of circles representing neurons and arrows representing synapses. 
Figure 1: The four neural connectivity patterns in the brain. Each circle represents a neuron, and each arrow represents a synapse. 

Inspired by this biological architecture, researchers have developed CircuitNet, a neural network that replicates multiple types of connectivity patterns. CircuitNet’s design features a combination of densely connected local nodes and fewer connections between distant regions, enabling enhanced signal transmission through circuit motif units (CMUs)—small, recurring patterns of connections that help to process information. This structure, shown in Figure 2, supports multiple rounds of signal processing, potentially advancing how AI systems handle complex information. 

Diagram illustrating CircuitNet's architecture. On the left, diagrams labeled “Model Inputs” and “Model Outputs” show that CircuitNet can handle various input forms and produce corresponding outputs. The middle section, labeled “CircuitNet”, depicts several interconnected blocks called Circuit Motif Units (CMUs for short), which maintain locally dense communications through direct connections and globally sparse communications through their input and output ports. On the right, a detailed view of a single CMU reveals densely interconnected neurons, demonstrating how each CMU models a universal circuit motif.
Figure 2. CircuitNet’s architecture: A generic neural network performs various tasks, accepts different inputs, and generates corresponding outputs (left). CMUs keep most connections local with few long-distance connections, promoting efficiency (middle). Each CMU has densely interconnected neurons to model universal circuit patterns (right).

Evaluation results are promising. CircuitNet outperformed several popular neural network architectures in function approximation, reinforcement learning, image classification, and time-series prediction. It also achieved comparable or better performance than other neural networks, often with fewer parameters, demonstrating its effectiveness and strong generalization capabilities across various machine learning tasks. Our next step is to test CircuitNet’s performance on large-scale models with billions of parameters. 
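Conceptually, this locally dense, globally sparse wiring can be sketched as a connectivity mask. The snippet below is an illustrative simplification only; the function name, CMU sizes, and number of long-distance links are hypothetical and do not reflect CircuitNet’s actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def circuitnet_style_mask(n_cmus=4, neurons_per_cmu=8, links_per_cmu=2):
    """Connectivity mask in the CircuitNet spirit: dense connections inside
    each CMU, plus a handful of long-distance links between CMUs."""
    n = n_cmus * neurons_per_cmu
    mask = np.zeros((n, n), dtype=bool)
    for c in range(n_cmus):                       # locally dense blocks
        lo, hi = c * neurons_per_cmu, (c + 1) * neurons_per_cmu
        mask[lo:hi, lo:hi] = True
    for _ in range(links_per_cmu * n_cmus):       # globally sparse links
        a, b = rng.choice(n_cmus, size=2, replace=False)
        i = a * neurons_per_cmu + rng.integers(neurons_per_cmu)
        j = b * neurons_per_cmu + rng.integers(neurons_per_cmu)
        mask[i, j] = True
    return mask

mask = circuitnet_style_mask()
print(mask.shape, int(mask.sum()))
```

Applying such a mask to a weight matrix keeps most parameters inside local clusters, which is what promotes efficiency in this style of architecture.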

Spiking neural networks: A new framework for time-series prediction

Spiking neural networks (SNNs) are emerging as a powerful type of artificial neural network, noted for their energy efficiency and potential applications in fields like robotics, edge computing, and real-time processing. Unlike traditional neural networks, which process signals continuously, SNNs activate neurons only when their accumulated input reaches a specific threshold, generating discrete spikes. This approach simulates the way the brain processes information and conserves energy. However, SNNs are weak at predicting future events based on historical data, a key function in sectors like transportation and energy.
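The threshold-and-spike behavior can be illustrated with a minimal leaky integrate-and-fire neuron, a standard SNN building block. This is a simplified sketch, not the specific neuron model used in this research:

```python
def lif_neuron(inputs, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire: leak, accumulate input, and emit a
    spike (then reset) only when the membrane potential crosses threshold."""
    v, spikes = 0.0, []
    for x in inputs:
        v = decay * v + x        # leaky integration of the input current
        if v >= threshold:
            spikes.append(1)     # threshold crossed: fire a spike
            v = 0.0              # reset after spiking
        else:
            spikes.append(0)     # stay silent, keep integrating
    return spikes

print(lif_neuron([0.6, 0.6, 0.0, 0.6, 0.6]))  # → [0, 1, 0, 0, 1]
```

Because the neuron is silent most of the time, computation happens only at spikes, which is the source of the energy savings described above.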

To improve SNNs’ predictive capabilities, researchers have proposed an SNN framework designed to predict trends over time, such as electricity consumption or traffic patterns. This approach exploits the efficiency of spiking neurons in processing temporal information, aligning time-series data—collected at regular intervals—with the SNNs’ discrete processing steps. Two encoding layers transform the time-series data into spike sequences, which the SNNs then process to make accurate predictions, as shown in Figure 3.

Diagram illustrating a new framework for SNN-based time-series prediction. The image shows the process starting with time series input, which is encoded into spikes by a novel spike encoder. These spikes are then fed into different SNN models: (a) Spike-TCN, (b) Spike-RNN, and (c) Spike-Transformer. Finally, the learned features are input into a projection layer for prediction.
Figure 3. A new framework for SNN-based time-series prediction: Time series data is encoded into spikes using a novel spike encoder (middle, bottom). The spikes are then processed by SNN models (Spike-TCN, Spike-RNN, and Spike-Transformer) for learning (top). Finally, the learned features are fed into the projection layer for prediction (bottom-right). 

Tests show that this SNN approach is highly effective for time-series prediction, often matching or outperforming traditional methods while significantly reducing energy consumption. SNNs successfully capture temporal dependencies and model time-series dynamics, offering an energy-efficient approach that closely aligns with how the brain processes information. We plan to continue exploring ways to further improve SNNs based on the brain’s information-processing principles. 
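To make the encoding idea concrete, here is a minimal delta-modulation spike encoder, one common way to turn a continuous series into spike channels. This is an illustrative sketch; the framework’s actual encoding layers are more sophisticated:

```python
def delta_spike_encode(series, threshold=0.5):
    """Delta-modulation encoder: emit an 'up' spike when the signal rises
    by at least `threshold` since the last recorded level, a 'down' spike
    when it falls by at least that much, and nothing otherwise."""
    up, down = [], []
    last = series[0]
    for x in series[1:]:
        if x - last >= threshold:
            up.append(1); down.append(0); last = x      # rising edge
        elif last - x >= threshold:
            up.append(0); down.append(1); last = x      # falling edge
        else:
            up.append(0); down.append(0)                # no significant change
    return up, down

up, down = delta_spike_encode([0.0, 1.0, 1.2, 0.1, 0.1])
print(up, down)   # up marks the rise, down marks the later drop
```

The resulting binary channels are a natural input format for spiking models, since they carry information only where the signal actually changes.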

Refining SNN sequence prediction

While SNNs can help models predict future events, research has shown that their reliance on spike-based communication makes it challenging to directly apply many techniques from artificial neural networks. For example, SNNs struggle to effectively process the rhythmic and periodic patterns found in natural language processing and time-series analysis. In response, researchers developed a new approach for SNNs called CPG-PE, which combines two techniques:

  1. Central pattern generators (CPGs): Neural circuits in the brainstem and spinal cord that autonomously generate rhythmic patterns, controlling functions like locomotion, breathing, and chewing 
  2. Positional encoding (PE): A process that helps artificial neural networks discern the order and relative positions of elements within a sequence 

By integrating these two techniques, CPG-PE helps SNNs discern the position and timing of signals, improving their ability to process time-based information. This process is shown in Figure 4. 

Diagram illustrating the application of CPG-PE in a SNN. It shows three main components: an input spike matrix labeled “X”, a transformation process involving positional encoding and linear transformation to produce “X’”, and the output from a spiking neuron layer labeled “X_output”. The input matrix “X” has multiple rows corresponding to different channels or neurons, each containing spikes over time steps. The transformation process maps the dimensionality from (D + 2N) to D. The spiking neuron layer takes the transformed input “X’” and produces the output spike matrix “X_output”.
Figure 4: Application of CPG-PE in an SNN. X, X′, and X_output are spike matrices. 
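As a rough intuition for how rhythmic positional information can itself be expressed as spikes, consider binarizing sinusoids of different periods into a binary code. This is a hypothetical toy, not the paper’s actual CPG-PE formulation:

```python
import numpy as np

def rhythmic_spike_pe(n_steps, n_channels):
    """Rhythmic positional code: sinusoids with doubling periods,
    binarized so the code is itself a spike train (1 near each peak)."""
    t = np.arange(n_steps)[:, None]                 # time steps
    periods = 2.0 ** np.arange(2, n_channels + 2)   # periods 4, 8, 16, ...
    phase = 2 * np.pi * t / periods[None, :]
    return (np.sin(phase) > 0.5).astype(np.int8)    # spike near each peak

pe = rhythmic_spike_pe(8, 4)
print(pe[:, 0])
```

Each channel spikes with a different period, so the combination uniquely tags positions in time while remaining compatible with spike-based hardware.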

We evaluated CPG-PE using four real-world datasets: two covering traffic patterns and one each for electricity consumption and solar energy. The results demonstrate that SNNs using this method significantly outperform those without positional encoding, as shown in Table 1. Moreover, CPG-PE can be easily integrated into any SNN designed for sequence processing, making it adaptable to a wide range of neuromorphic chips and SNN hardware.

Table showing experimental results of time-series forecasting on two datasets, Metr-la and Pems-bay, with prediction lengths of 6, 24, 48, and 96. The table compares the performance of various models, including different configurations of SNN, RNN, and Transformers. Performance metrics such as RSE and R^2 are reported. The best SNN results are highlighted in bold, and up-arrows indicate higher scores, representing better performance.
Table 1: Evaluation results of time-series forecasting on two benchmarks with prediction lengths 6, 24, 48, 96. “Metr-la” and “Pems-bay” are traffic-pattern datasets. The best SNN results are in bold. The up-arrows indicate a higher score, representing better performance. 

Ongoing AI research for greater capability, efficiency, and sustainability

The innovations highlighted in this blog demonstrate the potential to create AI that is not only more capable but also more efficient. Looking ahead, we’re excited to deepen our collaborations and continue applying insights from neuroscience to AI research, continuing our commitment to exploring ways to develop more sustainable technology.

The post Innovations in AI: Brain-inspired design for more capable and sustainable technology appeared first on Microsoft Research.

Research Focus: Week of August 26, 2024

Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft.

Register now for Research Forum on September 3

Discover what’s next in the world of AI at Microsoft Research Forum, an event series that explores recent research advances, bold new ideas, and important discussions with the global research community.

In Episode 4, learn about Microsoft’s research initiatives at the frontiers of multimodal AI. Discover novel models, benchmarks, and infrastructure for self-improvement, agents, weather prediction, and more.

Your one-time registration includes access to our live chat with researchers on the event day.

Episode 4 will air Tuesday, September 3 at 9:00 AM Pacific Time.


Can LLMs Learn by Teaching? A Preliminary Study

Teaching to improve student models (e.g., knowledge distillation) is an extensively studied methodology in large language models (LLMs). However, for humans, teaching not only improves students but also improves teachers. In a recent paper: Can LLMs Learn by Teaching? A Preliminary Study, researchers from Microsoft and external colleagues explore whether that rule also applies to LLMs. If so, this could potentially enable the models to advance and improve continuously without solely relying on human-produced data or stronger models.

In this paper, the researchers show that learning by teaching (LbT) practices can be incorporated into existing LLM training/prompting pipelines and provide noticeable improvements. They design three methods, each mimicking one of the three levels of LbT in humans: observing students’ feedback; learning from the feedback; and learning iteratively, with the goals of improving answer accuracy without training and improving the models’ inherent capability with fine-tuning. The results show that LbT is a promising paradigm to improve LLMs’ reasoning ability and outcomes on several complex tasks (e.g., mathematical reasoning, competition-level code synthesis). The key findings are: (1) LbT can induce weak-to-strong generalization—strong models can improve themselves by teaching other weak models; (2) Diversity in student models might help—teaching multiple student models could be better than teaching one student model or the teacher itself. This study also offers a roadmap for integrating more educational strategies into the learning processes of LLMs in the future. 


Arena Learning: Building a data flywheel for LLMs post-training via simulated chatbot arena

Conducting human-annotated competitions between chatbots is a highly effective approach to assessing the effectiveness of large language models (LLMs). However, this process comes with high costs and time demands, complicating the enhancement of LLMs via post-training. In a recent preprint: Arena Learning: Build Data Flywheel for LLMs Post-training via Simulated Chatbot Arena, researchers from Microsoft and external colleagues introduce an innovative offline strategy designed to simulate these arena battles. This includes a comprehensive set of instructions for simulated battles employing AI-driven annotations to assess battle outcomes, facilitating continuous improvement of the target model through both supervised fine-tuning and reinforcement learning. A crucial aspect of this approach is ensuring precise evaluations and achieving consistency between offline simulations and online competitions.

To this end, the researchers present WizardArena, a pipeline crafted to accurately predict the Elo rankings of various models using a meticulously designed offline test set. Their findings indicate that WizardArena’s predictions are closely aligned with those from the online arena. They apply this novel framework to train a model, WizardLM-β, which demonstrates significant performance enhancements across various metrics. This fully automated training and evaluation pipeline paves the way for ongoing incremental advancements in various LLMs via post-training.
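For background on the rankings WizardArena predicts, the standard Elo update after a single battle looks like the following. This is a generic sketch of the rating system, not the paper’s pipeline:

```python
def elo_update(r_winner, r_loser, k=32):
    """Standard Elo update after one battle: the winner takes points from
    the loser in proportion to how surprising the result was."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1.0 - expected_win)            # surprise-weighted transfer
    return r_winner + delta, r_loser - delta

a, b = elo_update(1500, 1500)
print(round(a), round(b))   # evenly matched: winner gains k/2 = 16 points
```

Beating an already weaker opponent moves the ratings much less, which is why a well-calibrated offline test set matters for predicting arena standings.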


MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention

Computational challenges of large language model (LLM) inference restrict their widespread deployment, especially as prompt lengths continue to increase. Due to the quadratic complexity of the attention computation, it takes 30 minutes for an 8-billion-parameter LLM to process a prompt of 1 million tokens (i.e., the pre-filling stage) on a single NVIDIA A100 graphics processing unit (GPU). Existing methods for speeding up pre-filling often fail to maintain acceptable accuracy or efficiency.

In a recent preprint: MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention, researchers from Microsoft introduce a sparse calculation method designed to accelerate pre-filling of long-sequence processing. They identify three unique patterns in long-context attention matrices – the A-shape, Vertical-Slash, and Block-Sparse – that can be leveraged for efficient sparse computation on GPUs. They determine the optimal pattern for each attention head offline and dynamically build sparse indices based on the assigned pattern during inference. They then perform efficient sparse attention calculations via optimized GPU kernels to reduce latency in the pre-filling stage of long-context LLMs. The research demonstrates that MInference (million tokens inference) reduces inference latency by up to 10x for pre-filling on an A100, while maintaining accuracy.
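As an illustration of one such pattern, an “A-shape” sparse mask can be approximated as attention to a few initial “sink” tokens plus a local causal window. The parameter names and sizes here are hypothetical; the paper defines the patterns precisely:

```python
import numpy as np

def a_shape_mask(n, sink=4, window=8):
    """Causal sparse attention mask: each query attends to the first
    `sink` tokens plus a sliding window of the `window` most recent ones."""
    q = np.arange(n)[:, None]   # query positions
    k = np.arange(n)[None, :]   # key positions
    causal = k <= q             # no attention to future tokens
    return causal & ((k < sink) | (q - k < window))

m = a_shape_mask(64)
full_causal = 64 * 65 // 2      # entries a dense causal mask would compute
print(int(m.sum()), full_causal)
```

Because each query touches only O(sink + window) keys instead of O(n), the attention cost grows linearly rather than quadratically in sequence length, which is the intuition behind the reported pre-filling speedups.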


Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs

Regular expressions (regex) are used to represent and match patterns in text documents in a variety of applications: content moderation, input validation, firewalls, clinical trials, and more. Existing use cases assume that the regex and the document are both readily available to the querier, so they can match the regex on their own with standard algorithms. But what about situations where the document is actually held by someone else who does not wish to disclose to the querier anything about the document besides the fact that it matches or does not match a particular regex? The ability to prove such facts enables interesting new applications. 

In a recent paper: Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs, researchers from Microsoft and the University of Pennsylvania present a system for generating publicly verifiable, succinct, non-interactive, zero-knowledge proofs that a committed document matches or does not match a regular expression. They describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Experimental evaluation confirms that Reef can generate proofs for documents with 32 million characters; the proofs are small and cheap to verify, taking less than one second.

Reef is built on an open-source project from Microsoft Research, Nova: High-speed recursive arguments from folding schemes, which implements earlier research work described in a paper titled Nova: Recursive Zero-Knowledge Arguments from Folding Schemes by researchers from Microsoft, Carnegie Mellon University, and New York University.


HyperNova: Recursive arguments for customizable constraint systems

Incrementally verifiable computation (IVC) is a powerful cryptographic tool that allows its user to produce a proof of the correct execution of a “long running” computation in an incremental fashion. IVC enables a wide variety of applications in decentralized settings, including verifiable delay functions, succinct blockchains, rollups, verifiable state machines, and proofs of machine executions.

In a recent paper: HyperNova: Recursive arguments for customizable constraint systems, researchers from Microsoft and Carnegie Mellon University introduce a new recursive argument for proving incremental computations whose steps are expressed with CCS, a customizable constraint system that simultaneously generalizes Plonkish, R1CS, and AIR without overheads. HyperNova resolves four major problems in the area of recursive arguments.

First, it provides a folding scheme for CCS where the prover’s cryptographic cost is a single multiscalar multiplication (MSM) of size equal to the number of variables in the constraint system, which is optimal when using an MSM-based commitment scheme. This makes it easier to build generalizations of IVC, such as proof carrying data (PCD). Second, the cost of proving program executions on stateful machines (e.g., EVM, RISC-V) is proportional only to the size of the circuit representing the instruction invoked by the program step. Third, the researchers use a folding scheme to “randomize” IVC proofs, achieving zero-knowledge for “free” and without the need to employ zero-knowledge SNARKs. Fourth, the researchers show how to efficiently instantiate HyperNova over a cycle of elliptic curves. 


What’s Your Story: Lex Story

In the Microsoft Research Podcast series What’s Your Story, Johannes Gehrke explores the who behind the technical and scientific advancements helping to reshape the world. A systems expert whose 10 years with Microsoft span research and product, Gehrke talks to members of the company’s research community about what motivates their work and how they got where they are today.

In this episode, Gehrke is joined by Lex Story, a model maker and fabricator whose craftsmanship has helped bring research to life through prototyping. He’s contributed to projects such as Jacdac, a hardware-software platform for connecting and coding electronics, and the biological monitoring and intelligence platform Premonition. Story shares how his father’s encouragement helped stoke a curiosity that has informed his pursuit of the sciences and art; how his experience with the Marine Corps intensified his desire for higher education; and how his heritage and a sabbatical in which he attended culinary school might inspire his next career move …

Transcript

[TEASER] [MUSIC PLAYS UNDER DIALOGUE]

LEX STORY: Research is about iteration. It’s about failing and failing fast so that you can learn from it. You know, we spin on a dime. Sometimes, we go, whoa, we went the wrong direction. But we learn from it, and it just makes us better.

JOHANNES GEHRKE: Microsoft Research works at the cutting edge. But how much do we know about the people behind the science and technology that we create? This is What’s Your Story, and I’m Johannes Gehrke. In my 10 years with Microsoft, across product and research, I’ve been continuously excited and inspired by the people I work with, and I’m curious about how they became the talented and passionate people they are today. So I sat down with some of them. Now, I’m sharing their stories with you. In this podcast series, you’ll hear from them about how they grew up, the critical choices that shaped their lives, and their advice to others looking to carve a similar path.   

[MUSIC FADES]

In this episode, I’m talking with model maker and fabricator Lex Story. His creativity and technical expertise in computer-aided industrial design and machining are on display in prototypes and hardware across Microsoft—from Jacdac, a hardware-software platform for connecting and coding electronics, to the biological monitoring and intelligence platform Microsoft Premonition. But he didn’t start out in research. Encouraged by his father, he pursued opportunities to travel, grow, and learn. This led to service in the Marine Corps; work in video game development and jewelry design; and a sabbatical to attend culinary school. He has no plans of slowing down. Here’s my conversation with Lex, beginning with hardware development at Microsoft Research and his time growing up in San Bernardino, California.


GEHRKE: Welcome, Lex.

LEX STORY: Oh, thank you.

GEHRKE: Really great to have you here. Can you tell us a little bit about what you’re doing here at MSR (Microsoft Research) …

STORY: OK.

GEHRKE: … and how did you actually end up here?

STORY: Well, um, within MSR, I actually work in the hardware prototype, hardware development. I find solutions for the researchers, especially in the areas of developing hardware through various fabrication and industrial-like methods. I’m a model maker. My background is as an industrial designer and a product designer. So when I attended school initially, it was to pursue a science; it was [to] pursue chemistry.

GEHRKE: And you grew up in California?

STORY: I grew up in California. I was born in Inglewood, California, and I grew up in San Bernardino, California. Nothing really too exciting happening in San Bernardino, which is why I was compelled to find other avenues, especially to go seek out travel. To do things that I knew that I would be able to look back and say, yes, you’ve definitely done something that was beyond what was expected of you having grown up in San Bernardino.

GEHRKE: And you even had that drive during your high school, or …

STORY: Yeah, high school just didn’t feel like … I think it was the environment that I was growing up in; it didn’t feel as if they really wanted to foster exceptional growth. And I had a father who was … had multiple degrees, and he had a lot of adversity, and he had a lot of challenges. He was an African American. He was a World War II veteran. But he had attained degrees, graduate degrees, in various disciplines, and that included chemical engineering, mechanical engineering, and electrical engineering.

GEHRKE: Wow. All three of them?

STORY: Yes. And so he was … had instilled into us that, you know, education is a vehicle, and if you want to leave this small town, this is how you do it you. But you need to be a vessel. You need to absorb as much as you can from a vast array of disciplines. And not only was he a man of science; he was also an artist. So he always fostered that in us. He said, you know, explore, gain new skills, and the collection of those skills will make you greater overall. He’s not into this idea of being such a specialist. He says lasers are great, but lasers can be blind to what’s happening around them. He says you need to be a spotlight. And he says then you have a great effect on a large—vast, vast array of things instead of just being focused on one thing.

GEHRKE: So you grew up in this environment where the idea was to, sort of, take a holistic view and not, like, a myopic view …

STORY: Yes, yes, yes.

GEHRKE: And so what is the impact of that on you?

STORY: Well, as soon as I went into [LAUGHS] the Marine Corps, I said, now I can attain my education. And …

GEHRKE: So right after school, you went to the …

STORY: I went directly into the Marine Corps right after high school graduation.

GEHRKE: And you told me many times, this is not the Army, right?

STORY: No, it’s the Marine Corps. It’s a big differentiation between … they’re both in military service. However, the Marine Corps is very proud of its traditions, and they instill that in us during boot camp, your indoctrination. It is drilled upon you that you are not just an arm of the military might. You are a professional. You are representative. You will basically become a reflection of all these other Marines who came before you, and you will serve as a point of the young Marines who come after you. So that was drilled into us from day one of boot camp. It was … but it was very grueling. You know, that was the one aspect, and there was a physical aspect. And the Marine Corps boot camp is the longest of all the boot camps. It was, for me, it was 12 weeks of intensive, you know, training. So, you know, the indoctrination is very deep.

GEHRKE: And then so it’s your high school, and you, sort of, have this holistic thinking that you want to bring in.

STORY: Yes.

GEHRKE: And then you go to the Marines.

STORY: I go to the Marines. And the funny thing is that I finished my enlistment, and after my enlistment, I enroll in college, and I say, OK, great; that part of my … phase of life is over. However, I’m still active reserve, and the Desert Shield comes up. So I’m called back, and I said, OK, well, I can come back. I served in a role as an NBC instructor. “NBC” stands for nuclear, biological, chemical warfare. And one of the other roles that I had in the Marine Corps, I was also a nuke tech. That means I knew how to deploy artillery-delivered nuclear-capable warheads. So I had this very technical background mixed in with, like, this military, kind of, decorum. And so I served in Desert Shield, and then eventually that evolved into Operation Desert Storm, and once that was over, I was finally able to go back and actually finish my schooling.

GEHRKE: Mm-hmm. So you studied for a couple of years and then you served?

STORY: Oh, yes, yes.

GEHRKE: OK. OK.

STORY: I had done a four-year enlistment, and you have a period of years after your enlistment where you can be recalled, and it would take very little time for you to get wrapped up for training again to be operational.

GEHRKE: Well, that must be a big disruption right in the middle of your studies, and …

STORY: It was a disruption that …

GEHRKE: And thank you for your service.

STORY: [LAUGHS] Thank you. I appreciate that. It was a disruption, but it was a welcome disruption because, um, it was a job that I knew that I could do well. So I was willing to do it. And when I was ready for college again, it made me a little hungrier for it.

GEHRKE: And you’re already a little bit more mature than the average college student …

STORY: Oh, yes.

GEHRKE: … when you entered, and then now you’re coming back from your, sort of, second time.

STORY: I think it was very important for me to actually have that military experience because [through] that military experience, I had matured. And by the time I was attending college, I wasn’t approaching it as somebody who was in, you know, their teenage years and it’s still formative; you’re still trying to determine who you are as a person. The military had definitely shown me, you know, who I was as a person, and I actually had a few, you know, instances where I actually saw some very horrible things. If anything, being in a war zone during war time, it made me a pacifist, and I have … it increased my empathy. So if anything, there was a benefit from it. I saw some very horrible things, and I saw some amazing things come from human beings on both ends of the spectrum.

GEHRKE: And it’s probably something that’s influenced the rest of your life also in terms of where you went as your career, right?

STORY: Yes.

GEHRKE: So what were you studying, and then what were your next steps?

STORY: Well, I was studying chemistry.

GEHRKE: OK, so not only chemistry and mechanical engineering …

STORY: And then I went away, off to Desert Storm, and when I came back, I decided I didn’t want to study chemistry anymore. I was very interested in industrial design and graphic design, and as I was attending, at ArtCenter College of Design in Pasadena, California, there was this new discipline starting up, but it was only for graduates. It was a graduate program, and it was called computer-aided industrial design. And I said, wait a minute, what am I doing? This is something that I definitely want to do. So it was, like, right at the beginning of computer-generated imagery, and I had known about CAD in a very, very rudimentary form. My father, luckily, had introduced me to computers, so as I was growing up a child in the ’70s and the ’80s, we had computers in our home because my dad was actually building them. So his background and expertise—he was working for RCA; he was working for Northrop Grumman. So I was very familiar with those. 

GEHRKE: You built PCs at home, or what, what … ?

STORY: Oh, he built PCs. I learned to program. So I … 

GEHRKE: What was your first programming language?

STORY: Oh, it was BASIC …

GEHRKE: BASIC. OK, yup.

STORY: … of course. It was the only thing I could check out in the library that I could get up and running on. So I was surrounded with technology. While most kids went away, summer camp, I spent my summer in the garage with my father. He had metalworking equipment. I understood how to operate metal lathes. I learned how to weld. I learned how to rebuild internal combustion engines. So my childhood was very different from what most children had experienced during their summer break. And also at that time, he was working as a … in chemistry. So his job then, I would go with him and visit his job and watch him work in a lab environment. So it was very, very unique. But also the benefit of that is that being in a lab environment was connected to other sciences. So I got to see other departments. I got to see the geology department. I got to see … there was disease control in the same department that he was in. So I was exposed to all these things. So I was always very hungry and interested, and I was very familiar with sciences. So looking at going back into art school, I said, oh, I’m going to be an industrial designer, and I dabble in art. And I said, wait a minute. I can use technology, and I can create things, and I can guide machines. And that’s the CAM part, computer-aided machining. So I was very interested in that. And then having all of this computer-generated imagery knowledge, I did one of the most knuckleheaded things I could think of, and I went into video game development.

GEHRKE: Why is it knuckleheaded? I mean, it’s probably just the start of big video games.

STORY: Well, I mean … it wasn’t, it wasn’t a science anymore. It was just pursuit of art. And while I was working in video game development, it was fun. I mean, no doubt about it. And that’s how I eventually came to Microsoft, is the company I was working for was bought, purchased by Microsoft.

GEHRKE: But why is it only an art? I’m so curious about this because even computer games, right, there’s probably a lot of science about A/B testing, science of the infrastructure …

STORY: Because I was creating things strictly for the aesthetics.

GEHRKE: I see.

STORY: And I had the struggle in the back of my mind. It’s, like, why don’t we try to create things so that they’re believable, and there’s a break you have to make, and you have to say, is this entertaining? Because in the end, it’s entertainment. And I’ve always had a problem with that.

GEHRKE: It’s about storytelling though, right?

STORY: Yes, it is about storytelling. And that was one of the things that was always told to us: you’re storytellers. But eventually, it wasn’t practical, and I wanted to be impactful, and I couldn’t be impactful doing that. I could entertain you. Yeah, that’s great. It can add some levity to your life. But I was hungry for other things, so I took other jobs eventually. I thought I was going to have a full career with it, and I decided, no, this is not the time to do it.

GEHRKE: That’s a big decision though, right?

STORY: Oh, yeah. Yeah.

GEHRKE: Because, you know, you had a good job at a game company, and then you decided to …

STORY: But there was no, there was no real problem solving for me.

GEHRKE: I see. Mm-hmm.

STORY: And there was opportunity where there was a company, and they were using CAD, and they were running wax printers, and it was a jewel company. And I said, I can do jewelry.

GEHRKE: So what is a wax printer? Explain that.

STORY: Well, here’s … the idea is you can do investment casting.

GEHRKE: Yeah.

STORY: So if you’re creating all your jewelry with CAD, then you can be a jewelry designer and you can have something practical. The reason I took those jobs is because I wanted to learn more about metallurgy and metal casting. And I did that for a bit. And then, eventually, I—because of my computer-generated imagery background—I was able to find a gig with HoloLens. And so as I was working with HoloLens, I kept hearing about research, and they were like, oh yeah, look at this technology research created, and I go, where’s this research department? So I had entertained all these thoughts that maybe I should go and see if I can seek these guys out. And I did find them eventually. My previous manager, Patrick Therien, he brought me in, and I had an interview with him, and he asked me some really poignant questions. And he was a mechanical engineer by background. And I said, I really want to work here, and I need to show you that I can do the work. And he says, you don’t need to prove to me that you can do the work; you have to prove to me that you’re willing to figure it out.

GEHRKE: So how did you do that, or how did you show him?

STORY: I showed him a few examples. I came up with a couple of ideas, and then I demonstrated some solutions, and I was able to present those things to him during the interview. And so I came in as a vendor, and I said, well, if I apply myself, you know, rigorously enough, they’ll see the value in it. And, luckily, I caught the eye of … was it … Gavin [Jancke], and it was Patrick. And they all vouched for me, and they said, yeah, definitely, I have something that I can bring. And it’s always a challenge. The projects that come in, sometimes we don’t know what the solution is going to be, and we have to spend a lot of time thinking about how we’re going to approach it. And we also have to be able to approach it within the scope of what their project entails. They’re trying to prove a concept. They’re trying to publish. I want to make everything look like a car, a beautiful, svelte European designed … but that’s not always what’s asked. So I do have certain parameters I have to stay within, and it’s exciting, you know, to come up with these solutions. I’m generating a concept that in the end becomes a physical manifestation.

GEHRKE: Yeah, so how do you balance this? Because, I mean, from, you know, just listening to your story so far, which is really fascinating, is that there’s always this balance not only on the engineering side but also on the design and art side.

STORY: Yes!

GEHRKE: And then a researcher comes to you and says, I want x.

STORY: Yes, yes, yes. [LAUGHS]

GEHRKE: So how do you, how do you balance that?

STORY: It’s understanding my roles and responsibilities.

GEHRKE: OK.

STORY: It’s a tough conversation. It’s a conversation that I have often with my manager. Because in the end, I’m providing a service, and there are other outlets for me still. Every day, I draw. I have an exercise of drawing where I sit down for at least 45 minutes every day, and I put pen to paper because that is an outlet. I’m a voracious reader. I tackle things because—on a whim. It’s not necessarily that I’m going to become a master of it. So that’s why I attended culinary school. Culinary school fell into this whole curiosity with molecular gastronomy. And I said, wait a minute, I don’t want to be an old man …

GEHRKE: So culinary school is like really very, very in-depth understanding the chemistry of cooking. I mean, the way you understand it …

STORY: Yeah, the molecular gastronomy, the chemistry of cooking. Why does this happen? What is caramelization? What’s the Maillard effect?

GEHRKE: So it’s not about just the recipe for this cake, or so …

STORY: No … the one thing you learn in culinary school very quickly is recipes are inconsequential.

GEHRKE: Oh, really?

STORY: It’s technique.

GEHRKE: OK.

STORY: Because if I have a technique and I know what a roux is and what a roux is doing—and a roux is actually gelatinizing another liquid; it’s a carrier. Once you know these techniques and you can build on those techniques, recipes are irrelevant. Now, the only time recipes matter is when you’re dealing with specific ratios, but that’s still chemistry, and that’s only in baking. But everything else is all technique. I know how to break down the, you know, the connective tissue of a difficult cut of meat. I know what caramelization adds. I understand things like umami. So I look at things in a very, very different way than most people. I’m not like a casual cook, which drove me to go work for Cook’s Illustrated and America’s Test Kitchen, outside of Boston. Because it wasn’t so much about working in a kitchen; it was about exploration, a process. That all falls back into that maddening, you know, part of my personality … it’s like, what is the process? How can I improve that—how can I harness that process?

GEHRKE: So how was it to work there? Because I see food again as, sort of, this beautiful combination of engineering in some sense, creating the recipe. But then there’s also the art of it, right? The presentation.

STORY: Yes …

GEHRKE: And how do you actually put the different flavors together?

STORY: Well, a lot of that’s familiarity because it’s like chemistry. You have familiarity with reactions; you have familiarity and comparisons. So that all falls back into the science. Of course, when I plate it, that falls … I’m now borrowing on my aesthetics, my ability to create aesthetic things. So it fulfills all of those things. So, and that’s why I said, I don’t want to be an old man and say, oh, I wish I’d learned this. I wanted to attend school. I took a sabbatical, attended culinary school.

GEHRKE: So you took a sabbatical from Microsoft?

STORY: Oh, yes, when I was working in video games. Yeah.

GEHRKE: OK.

STORY: I took a sabbatical. I did that. And I was like, great. I got that out of the way. Who’s to say I don’t open a food truck?

GEHRKE: Yeah, I was just wondering, what else is on your bucket list, you know?

STORY: [LAUGHS] I definitely want to do the food truck eventually.

GEHRKE: OK, what would the food truck be about?

STORY: OK. My heritage, my background, is that I’m half Filipino and half French Creole Black.

GEHRKE: You also had a huge family. There’s probably a lot of really good cooking.

STORY: Oh, yeah. Well, I have stepbrothers and stepsisters from my Mexican stepmother, and she grew up cooking Mexican dishes. She was from the Sinaloa area of Mexico. And so I learned a lot of those things, very, very unique regional things that were from her area that you can’t find anywhere else.

GEHRKE: What’s an example? Now you’ve made me curious.

STORY: Capirotada. Capirotada is a Mexican bread pudding, and it utilizes a lot of very common techniques, but the ingredients are very specific to that region. So the preparation is very different. And I’ve had a lot of people actually come to me and say, I’ve never had capirotada like that. And then I have other people who say, that is exactly the way I had it. And by the way, my, you know, my family member was from the Sinaloa area. So, yeah, but from my Filipino heritage background, I would love to do something that is a fusion of Filipino foods. There’s a lot of great, great food like longganisa; there’s a pancit. There’s adobo. That’s actually adding vinegars to braised meats and getting really great results that way. It’s just a … but there’s a whole bevy of … but my idea eventually for a food truck, I’m going to keep that under wraps for now until I finally reveal it. Who’s, who’s to say when it happens.

GEHRKE: OK. Wow, that sounds super interesting. And so you bring all of these elements back into your job here at MSR in a way, because you’re saying, well, you have these different outlets for your art. But then you come here, and … what are some of the things that you’ve created over the last few years that you’re especially proud of?

STORY: Oh, phew … that would … Project Eclipse.

GEHRKE: Eclipse, uh-huh.

STORY: That’s the hyperlocal air-quality sensor.

GEHRKE: And this is actually something that was really deployed in cities …

STORY: Yes. It was deployed in Chicago …

GEHRKE: … so it had to be both aesthetically good and … to look nice, not only functional.

STORY: Well, it had not only … it had … first of all, I approached it from it has to be functional. But knowing that it was going to deploy, I had to design everything with a design for manufacturing method. So DFM—design for manufacturing—is from the ground up, I have to make sure that there are certain features as part of the design, and that is making sure I have draft angles because the idea is that eventually this is going to be a plastic-injected part.

GEHRKE: What is a draft angle?

STORY: A draft angle is so that a part can get pulled from a mold.

GEHRKE: OK …

STORY: If I build things with pure vertical walls, there’s too much even stress and the part will not actually extract from the mold. Every time you look at something that’s plastic injected, there’s something called the draft angle, where there’s actually a slight taper. It’s only 2 to 4 degrees, but it’s in there, and it needs to be in there; otherwise, you’re never going to get the part out of the mold. So I had to keep that in mind. So from the ground up, I had designed this thing—the end goal of this thing is for it to be reproduced in a production capacity. And so DFM was from day one. They came to me earlier, and they gave me a couple of parts that they had prototyped on a 3D printer. So I had to go through and actually re-engineer the entire design so that it would be able to hold the components, but …

GEHRKE: And to be waterproof and so on, right?

STORY: Well, waterproofing, that was another thing. We had a lot of iterations—and that was the other thing about research. Research is about iteration. It’s about failing and failing fast so that you can learn from it. Failure is not a four-lettered word. In research, we fail so that we use that as a steppingstone so that we can make discoveries and then succeed on that …

GEHRKE: We learn.

STORY: Yes, it’s a learning opportunity. As a matter of fact, the very first time we fail, I go to the whiteboard and write “FAIL” in big capital letters. It’s our very first one, and it’s our “First Attempt In Learning.” And that’s what I remember it as. It’s my big acronym. But it’s a great process. You know, we spin on a dime. Sometimes, we go, whoa, we went the wrong direction. But we learn from it, and it just makes us better.

GEHRKE: And sometimes you have to work under time pressure because, you know, there’s no …

STORY: There isn’t a single thing we don’t do in the world that isn’t under time pressure. Working in a restaurant … when I had to, as they say, grow my bones after culinary school, you work in a restaurant, and you gain that experience. And one of the …

GEHRKE: So in your sabbatical, you didn’t only go to culinary school; you actually worked in this restaurant, as well?

STORY: Oh, it’s required.

GEHRKE: It’s a requirement? OK.

STORY: Yeah, yeah, it’s a requirement that you understand, you familiarize yourself with the rigor. So one of the things we used to do is … there was a Denny’s next to LAX in Los Angeles. Because I was attending school in Pasadena. And I would go and sign up to be the fry cook at a Denny’s that doesn’t close. It’s 24 hours.

GEHRKE: Yup …

STORY: And these people would come in, these taxis would come in, and they need to eat, and they need to get in and out.

GEHRKE: As a student, I would go to Denny’s at absurd times …

STORY: Oh, my, it was like drinking from a fire hose. I was getting crushed every night. But after a while, you know, within two or three weeks, I was like a machine, you know. And it was just like, oh, that’s not a problem. Oh, I have five orders here of this. And I need to make sure those are separated from these orders. And you have this entire process, this organization that happens in the back of your mind, you know. And that’s part of it. I mean, every job I’ve ever had, there’s always going to be a time pressure.

GEHRKE: But it must be even more difficult in research because you’re not building like, you know, Denny’s, I think you can fry probably five or 10 different things. Whereas here, you know, everything is unique, and everything is different. And then you, you know, you learn and improve and fail.

STORY: Yes, yes. But, I mean, it’s … but it’s the same as dealing with customers. Everyone’s going to have a different need and a different … there’s something that everyone’s bringing unique to the table. And when I was working at Denny’s, you’re going to have the one person that’s going to make sure that, oh, they want something very, very specific on their order. It’s no different than I’m working with, you know, somebody I’m offering a service to in a research environment.

GEHRKE: Mm-hmm. Mm-hmm. That’s true. I hadn’t even thought about this. Next time when I go to a restaurant, I’ll be very careful with the special orders. [LAUGHTER]

STORY: That’s why I’m exceptionally kind to those people who work in restaurants because I’ve been on the other side of the line.

GEHRKE: So, so you have seen many sides, right? And you especially are working also across developers, PMs, researchers. How do you bridge all of these different gaps? Because all of these different disciplines come with a different history and different expectations, and you work across all of them.

STORY: There was something somebody said to me years ago, and he says, never be the smartest guy in the room. Because at that point, you stop learning. And I was very lucky enough to work with great people like Mike Sinclair, Bill Buxton—visionaries. And one of the things that was always impressed upon me was they really let you shine, and they stepped back, and then when you had your chance to shine, they would celebrate you. And when it was their time to shine, you step back and make sure that they overshined. So it’s being extremely receptive to every idea. There’s nothing, there’s no … what do they say? The only bad idea is the lack of …

GEHRKE: Not having any ideas …

STORY: … having any ideas.

GEHRKE: Right, right …

STORY: Yeah. So being extremely flexible, receptive, willing to try things that even though they are uncomfortable, that’s I think where people find the most success.

GEHRKE: That’s such great advice. Reflecting back on your super-interesting career and all the different things that you’ve seen and also always stretching the boundaries, what’s your advice for anybody to have a great career if somebody’s starting out or is even changing jobs?

STORY: Gee, that’s a tough one. Starting out or changing—I can tell you about how to change jobs. Changing jobs … strip yourself of your ego. Be willing to be the infant, but also be willing to know when you’re wrong, and be willing to have your mind changed. That’s about it.

GEHRKE: Such, such great advice.

STORY: Yeah.

GEHRKE: Thanks so much, Lex, for the great, great conversation.

STORY: Not a problem. You’re welcome.

[MUSIC]

To learn more about Lex and to see pictures of him as a child or from his time in the Marines, visit aka.ms/ResearcherStories.

[MUSIC FADES]

The post What’s Your Story: Lex Story appeared first on Microsoft Research.


Abstracts: August 15, 2024


Microsoft Research Podcast - Abstracts

Members of the research community at Microsoft work continuously to advance their respective fields. Abstracts brings its audience to the cutting edge with them through short, compelling conversations about new and noteworthy achievements.

In this episode, Microsoft Product Manager Shrey Jain and OpenAI Research Scientist Zoë Hitzig join host Amber Tingle to discuss “Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online.” In their paper, Jain, Hitzig, and their coauthors describe how malicious actors can draw on increasingly advanced AI tools to carry out deception, making online deception harder to detect and more harmful. Bringing ideas from cryptography into AI policy conversations, they identify a possible mitigation: a credential that allows its holder to prove they’re a person––not a bot––without sharing any identifying information. This exploratory research reflects a broad range of collaborators from across industry, academia, and the civil sector specializing in areas such as security, digital identity, advocacy, and policy.

Transcript

[MUSIC]

AMBER TINGLE: Welcome to Abstracts, a Microsoft Research Podcast that puts the spotlight on world-class research—in brief. I’m Amber Tingle. In this series, members of the research community at Microsoft give us a quick snapshot—or a podcast abstract—of their new and noteworthy papers.

[MUSIC FADES]

Our guests today are Shrey Jain and Zoë Hitzig. Shrey is a product manager at Microsoft, and Zoë is a research scientist at OpenAI. They are two of the corresponding authors on a new paper, “Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online.” This exploratory research comprises multidisciplinary collaborators from across industry, academia, and the civil sector. The paper is available now on arXiv. Shrey and Zoë, thank you so much for joining us, and welcome back to the Microsoft Research Podcast.

SHREY JAIN: Thank you. We’re happy to be back.

ZOË HITZIG: Thanks so much.

TINGLE: Shrey, let’s start with a brief overview of your paper. Why is this research important, and why do you think this is something we should all know about?

JAIN: Malicious actors have been exploiting anonymity as a way to deceive others online. And historically, deception has been viewed as this unfortunate but necessary cost as a way to preserve the internet’s commitment to privacy and unrestricted access to information. And today, AI is changing the way we should think about malicious actors’ ability to be successful in those attacks. It makes it easier to create content that is indistinguishable from human-created content, and it is possible to do so in a way that is only getting cheaper and more accessible. And so this paper aims to address a countermeasure to protect against AI-powered deception at scale while also protecting privacy. And I think the reason why people should care about this problem is for two reasons. One is it can very soon become very logistically annoying to deal with these various different types of scams that can occur. I think we’ve all been susceptible to different types of attacks or scams that, you know, people have had. But now these scams are going to become much more persuasive and effective. And so for various different recovery purposes, it can become very challenging to get access back to your accounts or rebuild your reputation that someone may damage online. But more importantly, there’s also very dangerous things that can happen. Kids might not be safe online anymore. Or our ability to communicate online for democratic processes. A lot of the way in which we shape political views today happens online. And that’s also at risk. And in response to that, we propose in this paper a solution titled personhood credentials. Personhood credentials enable people to prove that they are in fact a real person without revealing anything more about themselves online.

TINGLE: Zoë, walk us through what’s already been done in this field, and what’s your unique contribution to the literature here?

HITZIG: I see us as intervening on two separate bodies of work. And part of what we’re doing in this paper is bringing together those two bodies of work. There’s been absolutely amazing work for decades in cryptography and in security. And what cryptographers have been able to do is to figure out protocols that allow people to prove very specific claims about themselves without revealing their full identity. So when you think about walking into a bar and the bartender asks you to prove that you’re over 21—or over 18, depending on where you are—you typically have to show your full driver’s license. And now that’s revealing a lot of information. It says, you know, where you live, whether you’re an organ donor. It’s revealing a lot of information to that bartender. And online, we don’t know what different service providers are storing about us. So, you know, the bartender might not really care where we live or whether we’re an organ donor. But when we’re signing up for digital services and we have to show a highly revealing credential like a driver’s license just to get access to something, we’re giving over too much information in some sense. And so this one body of literature that we’re really drawing on is a literature in cryptography. The idea that I was talking about there, where you can prove privately just isolated claims about yourself, that’s an idea called an anonymous credential. It allows you to be anonymous with respect to some kind of service provider while still proving a limited claim about yourself, like “I am over 18,” or in the case of personhood credentials, you prove, “I am a person.” So that’s all one body of literature. Then there’s this huge other body of literature and set of conversations happening in policy circles right now around what to do about AI. Huge questions abounding. 
Shrey and I have written a prior paper called “Contextual Confidence and Generative AI,” which we talked about on this podcast, as well, and in that paper, we offered a framework for thinking about the specific ways that generative AI, sort of, threatens the foundations of our modes of communication online. And we outlined about 16 different solutions that could help us to solve the coming problems that generative AI might bring to our online ecosystems. And what we decided to do in this paper was focus on a set of solutions that we thought are not getting enough attention in those AI and AI policy circles. And so part of what this paper is doing is bringing together these ideas from this long body of work in cryptography into those conversations.

TINGLE: I’d like to know more about your methodology, Shrey. How did your team go about conducting this research?

JAIN: So we had a wide range of collaborators from industry, academia, and the civil sector who work on topics of digital identity, privacy, advocacy, security, and AI policy, who came together to think about the clearest way in which we can explain what we believe is a countermeasure that can protect against AI-powered deception. From a technological point of view, there’s already a large body of work that we can reference, but we focused on how this can be implemented, discussing the tradeoffs that various different types of academics and industry leaders are thinking about. Can we communicate that very clearly? And so the methodology here was really about bringing together a wide range of collaborators to really bridge these two bodies of work together and communicate it clearly—not just the technical solutions but also the tradeoffs.

TINGLE: So, Zoë, what are the major findings here, and how are they presented in the paper?

HITZIG: I am an economist by training. Economists love to talk about tradeoffs. You know, when you have some of this, it means you have a little bit less of that. It’s kind of like the whole business of economics. And a key finding of the paper, as I see it, is that we begin with what feels like a tradeoff, which is on the one hand, as Shrey was saying, we want to be able to be anonymous online because that has great benefits. It means we can speak truth to power. It means we can protect civil liberties and invite everyone into online spaces. You know, privacy is a core feature of the internet. And at the same time, the, kind of, other side of the tradeoff that we’re often presented is, well, if you want all that privacy and anonymity, it means that you can’t have accountability. There’s no way of tracking down the bad actors and making sure that they don’t do something bad again. And we’re presented with this tradeoff between anonymity on the one hand and accountability on the other hand. All that is to say, a key finding of this paper, as I see it, is that personhood credentials and more generally this class of anonymous credentials that allow you to prove different pieces of your identity online without revealing your entire identity actually allow you to evade the tradeoff and allow you to, in some sense, have your cake and eat it, too. What it allows us to do is to create some accountability, to put back some way of tracing people’s digital activities to an accountable entity. What we also present in the paper are a number of different, sort of, key challenges that will have to be taken into account in building any kind of system like this. But we present all of that, all of those challenges going forward, as potentially very worth grappling with because of the potential for this, sort of, idea to allow us to preserve the internet’s commitment to privacy, free speech, and anonymity while also creating accountability for harm.

TINGLE: So Zoë mentioned some of these tradeoffs. Let’s talk a little bit more about real-world impact, Shrey. Who benefits most from this work?

JAIN: I think there’s many different people that benefit. One is anyone who’s communicating or doing anything online in that they can have more confidence in their interactions. And it, kind of, builds back on the paper that Zoë and I wrote last year on contextual confidence and generative AI, which is that we want to have confidence in our interactions, and in order to do that, one component is being able to identify who you’re speaking with and also doing it in a privacy-preserving way. And I think another person who benefits is policymakers. I think today, when we think about the language and technologies that are being promoted, this complements a lot of the existing work that’s being done on provenance and watermarking. And I think the ability for those individuals to be successful in their mission, which is creating a safer online space, this work can help guide these individuals to be more effective in their mission in that it highlights a technology that is not currently as discussed comparatively to these other solutions and complements them in order to protect online communication.

HITZIG: You know, social media is flooded with bots, and sometimes the problem with bots is that they’re posting fake content, but other times, the problem with bots is that there are just so many of them and they’re all retweeting each other and it’s very hard to tell what’s real. And so what a personhood credential can do is say, you know, maybe each person is only allowed to have five accounts on a particular social media platform.

TINGLE: So, Shrey, what’s next on your research agenda? Are there lingering questions—I know there are—and key challenges here, and if so, how do you hope to answer them?

JAIN: We believe we’ve aggregated a strong set of industry, academic, and, you know, civil sector collaborators, but we’re only a small subset of the people who are going to be interacting with these systems. And so the first area of next steps is to gather feedback about the proposal of a solution that we’ve had and how can we improve that: are there tradeoffs that we’re missing? Are there technical components that we weren’t thinking as deeply through? And I think there’s a lot of narrow open questions that come out of this. For instance, how do personhood credentials relate to existing laws regarding identity theft or protection laws? In areas where service providers can’t require government IDs, how does that apply to personhood credentials that rely on government IDs? I think that there’s a lot of these open questions that we address in the paper that I think need more experimentation and thinking through but also a lot of empirical work to be done. How do people react to personhood credentials, and does it actually enhance confidence in their interactions online? I think that there’s a lot of open questions on the actual effectiveness of these tools. And so I think there’s a large area of work to be done there, as well.

HITZIG: I’ve been thinking a lot about the early days of the internet. I wasn’t around for that, but I know that every little decision that was made in a very short period of time had incredibly lasting consequences that we’re still dealing with now. There’s an enormous path dependence in every kind of technology. And I feel that right now, we’re in that period of time, the small window where generative AI is this new thing to contend with, and it’s uprooting many of our assumptions about how our systems can work or should work. And I’m trying to think about how to set up those institutions, make these tiny decisions right so that in the future we have a digital architecture that’s really serving the goals that we want it to serve.

[MUSIC]

TINGLE: Very thoughtful. With that, Shrey Jain, Zoë Hitzig, thank you so much for joining us today.

HITZIG: Thank you so much, Amber.

TINGLE: And thanks to our listeners, as well. If you’d like to learn more about Shrey and Zoë’s work on personhood credentials and advanced AI, you’ll find a link to this paper at aka.ms/abstracts, or you can read it on arXiv. Thanks again for tuning in. I’m Amber Tingle, and we hope you’ll join us next time on Abstracts.

[MUSIC FADES]

The post Abstracts: August 15, 2024 appeared first on Microsoft Research.


Research Focus: Week of August 12, 2024


Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft.

Register now for Research Forum on September 3

Discover what’s next in the world of AI at Microsoft Research Forum (opens in new tab), an event series that explores recent research advances, bold new ideas, and important discussions with the global research community. 

In Episode 4, you’ll learn about the latest multimodal AI models, advanced benchmarks for AI evaluation and model self-improvement, and an entirely new kind of computer for AI inference and hard optimization. Discover how these research breakthroughs and more can help advance everything from weather prediction to materials design.

Your one-time registration includes access to our live chat with researchers on the event day and additional resources to dive into the research.

Episode 4 will air Tuesday, September 3 at 9:00 AM Pacific Time.

Microsoft Research Podcast

Ideas: Designing AI for people with Abigail Sellen

Social scientist and HCI expert Abigail Sellen explores the critical understanding needed to build human-centric AI through the lens of the new AICE initiative, a collective of interdisciplinary researchers studying AI impact on human cognition and the economy.


Towards Effective AI Support for Developers: A Survey of Desires and Concerns

Talking to customers provides important insights into their challenges as well as what they love. This helps identify innovative and creative ways of solving problems (without creating new ones) and guards against ruining workflows that customers actually like. However, many AI-related development tools are currently being built without consulting developers. 

In a recent paper: Towards Effective AI Support for Developers: A Survey of Desires and Concerns, researchers from Microsoft explore developers’ perspectives on AI integration in their workflows. This study reveals developers’ top desires for AI assistance along with their major concerns. The findings of this comprehensive survey of 791 Microsoft developers help the researchers identify key areas where AI can enhance productivity and how to address developers’ concerns, providing actionable insights for product teams and leaders to create AI tools that truly support developers’ needs.


SuperBench: Improving Cloud AI Infrastructure Reliability with Proactive Validation

Cloud service providers have used geographical redundancies in hardware to ensure availability of their cloud infrastructure for years. However, for AI workloads, these redundancies can inadvertently lead to hidden degradation, also known as “gray failure.” This can reduce end-to-end performance and conceal performance issues, which complicates root cause analysis for failures and regressions.

In a recent paper: SuperBench: Improving Cloud AI Infrastructure Reliability with Proactive Validation (opens in new tab), Microsoft researchers and Azure cloud engineers introduce a proactive validation system specifically for AI infrastructure that mitigates hidden degradation caused by hardware redundancies. The paper, which won a “best paper” award at USENIX ATC (opens in new tab), outlines SuperBench’s comprehensive benchmark suite, capable of evaluating individual hardware components and representing most real AI workloads. It includes a validator, which learns benchmark criteria to clearly pinpoint defective components, and a selector, which balances validation time and issue-related penalties, enabling optimal timing for validation execution with a tailored subset of benchmarks. Testbed evaluation and simulation show SuperBench can increase the mean time between incidents by up to 22.61x. SuperBench has been successfully deployed in Azure production, validating hundreds of thousands of GPUs over the last two years.
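The validator-and-selector split described above lends itself to a compact illustration. The sketch below is hypothetical, not SuperBench’s actual implementation: the names (`Benchmark`, `select_benchmarks`, `validate`), the greedy value-per-hour heuristic, and the thresholds are all assumptions made up for this example. It shows one plausible way a selector could trade validation time against issue-related penalties while a validator flags components that miss learned pass criteria.

```python
# Hypothetical sketch of a SuperBench-style select/validate loop.
# All names and numbers are illustrative; the real system learns its
# criteria and penalty estimates from production data.

from dataclasses import dataclass

@dataclass
class Benchmark:
    name: str
    runtime_hours: float      # time cost of running this benchmark
    expected_penalty: float   # expected cost of the incidents it would catch

def select_benchmarks(benchmarks, time_budget_hours):
    """Greedy selector: favor benchmarks with the highest expected
    penalty avoided per hour of validation time, within a time budget."""
    ranked = sorted(benchmarks,
                    key=lambda b: b.expected_penalty / b.runtime_hours,
                    reverse=True)
    chosen, used = [], 0.0
    for b in ranked:
        if used + b.runtime_hours <= time_budget_hours:
            chosen.append(b)
            used += b.runtime_hours
    return chosen

def validate(component_scores, criteria):
    """Validator: flag components whose benchmark score falls below
    the learned pass threshold for that benchmark."""
    return [c for c, (bench, score) in component_scores.items()
            if score < criteria[bench]]
```

In the paper’s framing the selector also chooses when to validate, and the validator learns its criteria rather than taking fixed thresholds; the greedy ratio heuristic here is only a stand-in for that machinery.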


Virtual Voices: Exploring Individual Differences in Written and Verbal Participation in Meetings

A key component of team performance is participation among group members. Workplace meetings provide a common stage for such participation. But with the shift to remote work, many meetings are conducted virtually. In such meetings, chat offers an alternate avenue of participation, in which attendees can synchronously contribute to the conversation through writing.

In a recent paper: Virtual Voices: Exploring Individual Differences in Written and Verbal Participation in Meetings (opens in new tab), researchers from Microsoft and external colleagues explore factors influencing participation in virtual meetings, drawing on individual differences (status characteristics theory), psychological safety perceptions, and group communication. Results of the paper, published in the Journal of Vocational Behavior (opens in new tab), reveal gender (self-identified) and job level nuances. Women engaged more in chat, while men verbally participated more frequently, as measured using meeting telemetry. Further, men highest in job level verbally contributed the most in virtual meetings, whereas women highest in job level used chat the most frequently. Regarding the types of chats sent, women used emoji reactions more often than men, and men sent more attachments than women. Additionally, results revealed psychological safety moderated the relationship between job level and overall chat participation, such that employees low in job level with high perceptions of psychological safety sent more chats than their counterparts. This study provides insights into communication patterns and the impact of psychological safety on participation in technology-mediated spaces. 


The post Research Focus: Week of August 12, 2024 appeared first on Microsoft Research.

Collaborators: AI and the economy with Brendan Lucier and Mert Demirer

Headshots of Brendan Lucier and Mert Demirer for the Microsoft Research Podcast

Transforming research ideas into meaningful impact is no small feat. It often requires the knowledge and experience of individuals from across disciplines and institutions. Collaborators, a Microsoft Research Podcast series, explores the relationships—both expected and unexpected—behind the projects, products, and services being pursued and delivered by researchers at Microsoft and the diverse range of people they’re teaming up with. 

What can the breakdown of jobs into their specific tasks tell us about the long-term impact of AI on the economy? Microsoft Senior Principal Researcher Brendan Lucier and MIT Assistant Professor Mert Demirer are combining their expertise in micro- and macroeconomics, respectively, to build a framework for answering the question and ultimately helping the world prepare for and responsibly steer the course of disruption accompanying the technology. In this episode, they share how their work fits into the Microsoft research initiative AI, Cognition, and the Economy, or AICE; how the evolution of the internet may indicate the best is yet to come for AI; and their advice for budding AI researchers.

Transcript 

[TEASER] 

[MUSIC PLAYS UNDER DIALOGUE] 

BRENDAN LUCIER: What we’re doing here is a prediction problem. And when we were trying to look into the future this way, one way we do that is we try to get as much information as we can about where we are right now. And so we were lucky to have, like, a ton of information about the current state of the economy and the labor market and some short-term indicators on how generative AI seems to be, sort of, affecting things right now, in this moment. And then the idea is to layer some theory models on top of that to try to extrapolate forward, right, in terms of what might be happening, sort of get a glimpse of this future point. 

MERT DEMIRER: So this is a prediction problem for which we cannot use machine learning or AI. Otherwise, it would have been a very easy problem to solve. So what you need instead is a model or, like, framework that will take, for example, the productivity gains or, like, a microfoundation as an input and then generate predictions for the entire economy. 


[TEASER ENDS] 

GRETCHEN HUIZINGA: You’re listening to Collaborators, a Microsoft Research Podcast showcasing the range of expertise that goes into transforming mind-blowing ideas into world-changing technologies. I’m Dr. Gretchen Huizinga.

[MUSIC FADES] 

On today’s episode, I’m talking to Dr. Brendan Lucier, a senior principal researcher in the economics and computation group at Microsoft Research, and Dr. Mert Demirer, an assistant professor of applied economics at the MIT Sloan School of Management. Brendan and Mert are exploring the economic impact of job automation and generative AI as part of Microsoft’s AI, Cognition, and the Economy, or AICE, research initiative. And since they’re part of the AICE Accelerator Pilot collaborations, let’s get to know our collaborators. Brendan, let’s start with you and your “business address,” if you will. Your research lives at the intersection of microeconomic theory and theoretical computer science. So tell us what people—shall we call them theorists?—who live there do and why they do it! 

BRENDAN LUCIER: Thank you so much for having me. Yeah, so this is a very interdisciplinary area of research that really gets at, sort of, this intersection of computation and economics. And what it does is it combines the ideas from algorithm design and computational complexity that we think of when we’re building algorithmic systems with, sort of, the microeconomic theory of how humans will use those systems and how individuals make decisions, right. How their goals inform their actions and how they interact with each other. And where this really comes into play is in the digital economy and platforms that we, sort of, see online that we work with on an everyday basis, right. So we’re increasingly interacting with algorithms as part of our day-to-day life. So we use them to search for information; we use them to find rides and find jobs and have recommendations on what products we purchase. And as we do these things online, you know, some of the algorithms that go into this, like, help them grow into these huge-scale, you know, internet-sized global platforms. But fundamentally, these are still markets, right. So even though there’s a lot of algorithms and a lot of computational ideas that go into these, really what they’re doing is connecting human users to the goods and the services and to each other over the course of what they need to do in their day-to-day life, right. And so this is where this microeconomic view really comes into play. So what we know is that when people are interacting with these platforms to get at what they want, they’re going to be strategic about this, right. So people are always going to use tools in the ways that, sort of, work best for them, right, even if that’s not what the designer has in mind. And so when we’re designing algorithms, in a big way, we’re not necessarily designing solutions; we’re designing the rules of a game that people are going to end up playing with the platform or with each other.

HUIZINGA: Wow. 

LUCIER: And so a big part of, sort of, what we do in this area is that if we’re trying to understand the impact of, like, a technology change or a new platform that we’re going to design, we need to understand what it is that the users want and how they’re going to respond to that change when they interact with it. 

HUIZINGA: Right.

LUCIER: When we think about, sort of, microeconomic theory, a lot of this is, you know, ideas from game theory, ideas about how it is that humans make decisions, either on their own or in interaction with each other, right.

HUIZINGA: Yeah.

LUCIER: So when I’m in the marketplace, maybe I’m thinking not only about what’s best for me, but, sort of, I’m anticipating maybe what other people are going to be doing, as well. And I really need to be thinking about how the algorithms that make up the fundamentals of those marketplaces are going to influence the way people are thinking about not only what they’re doing but what other people are doing. 

HUIZINGA: Yeah, this is so fascinating because even as you started to list the things that we use algorithms—and we don’t even think about it—but we look for a ride, a job, a date. All of these things that are part of our lives have become algorithmic! 

LUCIER: Absolutely. And it’s fascinating that, you know, when we think about, you know, someone might launch a new algorithm, a new advance to these platforms, that looks on paper like it’s going to be a great improvement, assuming that people keep behaving the way they were behaving before. But of course, people will naturally respond, and so there’s always this moving target of trying to anticipate what it is that people actually are really trying to do and how they will adapt. 

HUIZINGA: We’re going to get into that so deep in a few minutes. But first, Mert, you are an assistant professor of economics at MIT’s famous Sloan School of Management, and your homepage tells us your research interests include industrial organization and econometrics. So unpack those interests for our listeners and tell us what you spend most of your time doing at the Sloan School. 

MERT DEMIRER: Thank you so much for having me. My name is Mert Demirer. I am an assistant professor at MIT Sloan, and I spend most of my time doing research and teaching MBAs. And in my research, I’m an economist, so I do research in a field called industrial organization. And the overarching theme of my research is firms and firm productivity. So in my research, I ask questions like, what makes firms more productive? What are the determinants of firm growth, or how do industries evolve over time? So what I do is I typically collect data from firms, and I use an econometric model or sometimes an industry or firm model, and then I answer questions like these. And more recently, my research has focused on new emerging technologies and how firms use these emerging technologies and what are the productivity effects of these new technologies. More specifically, I did research on cloud computing, which is a really important technology … 

HUIZINGA: Yeah … 

DEMIRER: … transforming firms and industries. And more recently, my research focuses on AI, both, like, the adoption of AI and the productivity impact of AI. 

HUIZINGA: Right, right. You know, even as you say it, I’m thinking, what’s available data? What’s good data? And how much data do you need to make informed analysis or decisions? 

DEMIRER: So finding good data is a challenge in this research. In general, there are, like, official data sources like census or, like, census of manufacturers, which have been commonly used in productivity research. That data is very comprehensive and very useful. But of course, if you want to get into the details of, like, new technologies and, like, granular firm analysis, that’s not enough. So what I have been trying to do more recently is to find industry partners which have lots of good data on other firms. 

HUIZINGA: Gotcha. 

DEMIRER: So these are typically the main data sources I use. 

HUIZINGA: You know, this episode is part of a little series within a series we’re doing on AI, Cognition, and the Economy, and we started out with Abi Sellen from the Cambridge, UK, lab, who gave us an overview of the big ideas behind the initiative. And you’re going to give us some discussion today on AI and, specifically, the economy. But before we get into your current collaboration, let’s take a minute to “geolocate” ourselves in the world of economics and how your work fits into the larger AICE research framework. So, Brendan, why don’t you situate us with the “micro” view and its importance to this initiative, and then Mert can zoom out and talk about the “macro” view and why we need him, too. 

LUCIER: Yeah, sure. Yeah, I just, I just love this AICE program and the way that it puts all this emphasis on how human users are interacting with AI systems and tools, and this is really, like, a focal point of a lot of this, sort of, micro view, also. So, like, from this econ starting point of microeconomics, one place I think of is imagining how users would want to integrate AI tools into their day-to-day, right—into both their workflow as part of their jobs; in terms of, sort of, what they’re doing in their personal lives. And when we think about how new tools like AI tech, sort of, comes into those workflows, an even earlier question is, how is it that users are organizing what they do into individual tasks and, like, why are they doing them that way in the first place, right? So when we want to think about, you know, how it is that AI might come in and help them with pain points that they’re dealing with, we, sort of, need to understand, like, what it is they’re trying to accomplish and what are the goals that they have in mind. And this is super important when we’re trying to build effective tools because we need to understand how they’ll change their behavior or adjust to incorporate this new technology and trying to zoom into that view. 

HUIZINGA: Yeah. Mert, tell us a little bit more about the macro view and why that’s important in this initiative, as well. 

DEMIRER: The macro view is very complementary to the micro view, and it takes a more holistic approach and analyzes the economy with its components rather than focusing on individual components. So instead of focusing on one component and analyzing the collective effect of AI on a particular, like, occupation or sector, you analyze the whole economy and you model the interactions between these components. And this holistic view is really essential if you want to understand AI because this is going to allow you to make, like, long-term projections and it’s going to help you understand how AI is going to affect, like, the entire economy. And to make things, like, more concrete—and going back to what Brendan said—suppose you analyze a particular task or you figured out how AI solved the pain point and it increased the productivity by, like, x amount. So that impact on that occupation or, let’s say, industry won’t be limited to that industry, right? Wages are going to change in this industry, but it’s going to affect other industries, potentially, like, moving labor from one industry that is affected significantly by AI to other industries, and, like, maybe new firms are going to emerge, some firms are going to exit, and so on. So this holistic view essentially models all of these components in just one system and also tries to understand the interactions between those. And as I said, this is really helpful because, first of all, it helps you to make long-term projections about how AI is going to impact the economy. And second, it’s going to let you go beyond the first-order impact. Because you can essentially look at what’s going on and analyze or measure the first-order impact, but if you want to get the second- or third-order impact, then you need a framework or you need a bigger model. And those, like, second- or third-order effects are typically the unintended effects or the hidden effects. 

HUIZINGA: Right. 

DEMIRER: And that’s why this, like, more holistic approach is useful, particularly for AI. 

HUIZINGA: Yeah, I got to just say right now, I feel like I wanted to sit down with you guys for, like, a couple hours—not with a microphone—but just talking because this is so fascinating. And Abi Sellen mentioned this term “line of sight” into future projections, which was sort of an AICE overview goal. Interestingly, Mert, when you mentioned the term productivity, is that the metric? Is productivity the metric that we’re looking to in terms of this economic impact? It seems to be a buzzword, that we need to be “more productive.” Is that, kind of, a framework for your thinking? 

DEMIRER: I think it is an important component. It’s an important component how we should analyze and think about AI because again, like, when you zoom into, like, the micro view of, like, how AI is going to affect my day-to-day work, that is, like, very natural to think that in terms of, like, productivity—oh, I saved, like, half an hour yesterday by using, like, AI. And, OK, that’s the productivity, right. That’s very visible. Like, that’s something you can see, something you can easily measure. But that’s only one component. So you need to understand how that productivity effect is going to change other things. 

HUIZINGA: Right! 

DEMIRER: Like how I am going to spend the additional time, whether I’m going to spend that time for leisure or I’m going to do something else. 

HUIZINGA: Right. 

DEMIRER: In that sense, I think productivity is an important component, and maybe it is, like, the initial point to analyze these technologies. But we will definitely go beyond the productivity effect and understand how these, like, potential productivity effects are going to affect, like, the other parts of the economy and how the agents—like firms, people—are going to react to that potential productivity increase. 

HUIZINGA: Yeah, yeah, in a couple questions I’ll ask Brendan specifically about that. But in the meantime, let’s talk about how you two got together on this project. I’m always interested in that story. This question is also known as “how I met your mother.” And the meetup stories are often quite fun and sometimes surprising. In fact, last week, one person told his side of the story, and the other guy said, hey, I didn’t even know that! [LAUGHS] So, Brendan, tell us your side of who called who and how it went down, and then Mert can add his perspective. 

LUCIER: Great. So, yeah, so I’ve known Mert for quite some time! Mert joined our lab—the Microsoft Research New England lab—as an intern some years ago and then as a postdoc in, sort of, 2020, 2021. And so over that time, we got to know each other quite well, and I knew a lot about the macroeconomic work that Mert was doing. And so then, fast-forward to more recently, you know, this particular project initially started as discussions between myself and my colleague Nicole Immorlica at Microsoft Research and John Horton, who’s an economist at MIT who was visiting us as a visiting researcher, and we were discussing how the structure of different jobs, and how those jobs break down into tasks, might have an impact on how they might be affected by AI. And then very early on in that conversation, we, sort of, realized that, you know, this was really a … not just, like, a microeconomic question; it’s not just a market design question. The, sort of, the macroeconomic forces were super important. And then immediately, we knew, OK, like, Mert’s top of our list; we need, [LAUGHTER] we need, you know, to get Mert in here and talking to us about it. And so we reached out to him. 

HUIZINGA: Mert, how did you come to be involved in this from your perspective? 

DEMIRER: As Brendan mentioned, I spent quite a bit of time at Microsoft Research, both as an intern and as a postdoc, and Microsoft Research is a very, like, fun place to be as an economist and a really productive place to be as an economist because it’s very, like, interdisciplinary. It is a lot different from a typical academic department and especially an economics academic department. So my time at Microsoft Research has already led to a bunch of, like, papers and collaborations. And then when Brendan, like, emailed me with the research question, I thought it’s, like, a no-brainer. It’s an interesting research question, and it’s, like, part of Microsoft Research. So I said, yeah, let’s do it! 

HUIZINGA: Brendan, let’s get into this current project on the economic impact of automation and generative AI. Such a timely and fascinating line of inquiry. Part of your research involves looking at a lot of current occupational data. So from the vantage point of microeconomic theory and your work, tell us what you’re looking at, how you’re looking at it, and what it can tell us about the AI future. 

LUCIER: Fantastic. Yeah, so in some sense, the idea of this project and the thing that we’re hoping to do is, sort of, get our hands on the long-term economic impact of generative AI. But it’s fundamentally, like, a super-hard problem, right? For a lot of reasons. And one of those reasons is that, you know, some of the effects could be quite far in the future, right. So this is things where the effects themselves but especially, like, the data we might look at to measure them could be years or decades away. And so, fundamentally, what we’re doing here is a prediction problem. And when we were trying to, sort of, look into the future this way, one way we do that is we try to get as much information as we can about where we are right now, right. And so we were lucky to have, like, a ton of information about the current state of the economy and the labor market and some short-term indicators on how generative AI seems to be, sort of, affecting things right now in this moment. And then the idea is to, sort of, layer some theory models on top of that to try to extrapolate forward, right, in terms of what might be happening, sort of get a glimpse of this future point. So in terms of the data we’re looking at right now, there’s this absolutely fantastic dataset that comes from the Department of Labor. It’s the O*NET database. This is the, you know, Occupational Information Network—publicly available, available online—and what it does is basically it breaks down all occupations across the United States, gives a ton of information about them, including—and, sort of, importantly for us—a very detailed breakdown of the individual tasks that make up the day-to-day in terms of those occupations, right. So, for example, if you’re curious to know what, like, a wind energy engineer does day-to-day, you could just go online and look it up, and so it basically gives you the entire breakdown. Which is fantastic. I mean, it’s, you know, I love, sort of, browsing it. 
It’s an interesting thing to do with an afternoon. [LAUGHTER] But from our perspective, the fact that we have these tasks—and it actually gives really detailed information about what they are—lets us do a lot of analysis on things like how AI tools and generative AI might help with different tasks. There’s a lot of analysis that we and, like, a lot of other papers coming out the last year have done in looking at which tasks do we think generative AI can have a big influence on and which ones less so in the present moment, right. And there’s been work by, you know, OpenAI and LinkedIn and other groups, sort of, really leaning into that. We can actually take that one step further and actually look also at the structure between tasks, right. So we can see not only, like, what fraction of the time I spend are things that can be influenced by generative AI but also how they relate to, like, my actual, sort of, daily goals. Like, when I look at the tasks I have to do, do I have flexibility in when and where I do them, or are things in, sort of, a very rigid structure? Are there groups of interrelated tasks that all happen to be really exposed to generative AI? And, you know, what does that say about how workers might reorganize their work as they integrate AI tools in and how that might change the nature of what it is they’re actually trying to do on a day-to-day basis? 

HUIZINGA: Right. 

LUCIER: So just to give an example, so, like, one of the earliest examples we looked at as we started digging into the data and testing this out was radiology. And so radiology is—you know, this is medical doctors that specialized in using medical imaging technology—and it happens to be an interesting example for this type of work because you know there are lots of tasks that make that up and they have a lot of structure to them. And it turns out when you look at those tasks, there’s interestingly, like, a big group of tasks that all, sort of, are prerequisites for an important, sort of, core part of the job, … 

HUIZINGA: Right … 

LUCIER: … which is, sort of, recommending a plan of which tests to, sort of, perform, right. So these are things like analyzing medical history and analyzing procedure requests, summarizing information, forming reports. And these are all things that we, sort of, expect that generative AI can be quite effective at, sort of, assisting with, right. And so the fact that these are all, sort of, grouped together and feed into something that’s a core part of the job really is suggestive that there’s an opportunity here to delegate some of those, sort of, prerequisite tasks out to, sort of, AI tools so that the radiologist can then focus on the important part, which is the actual recommendations that they can make. 

HUIZINGA: Right. 

LUCIER: And so the takeaway here is that it matters, like, how these tasks are related to each other, right. Sort of, the structure of, you know, what it is that I’m doing and when I’m doing them, right. So this situation would perhaps be very different if, as I was doing these tasks where AI is very helpful, I was going back and forth doing consulting with patients or something like this, where in that, sort of, scenario, I might imagine that, yeah, like an AI tool can help me, like, on a task-by-task basis but maybe I’m less likely to try to, like, organize all those together and automate them away. 

HUIZINGA: Right. Yeah, let me focus a little bit more on this idea of you in the lab with all this data, kind of, parsing out and teasing out the tasks and seeing which ones are targets for AI, which ones are threatened by AI, which ones would be wonderful with AI. Do you have buy-in from these exemplar-type occupations that they say, yes, we would like you to do this to help us? I mean, is there any of that collaboration going on with these kinds of occupations at the task level? 

LUCIER: So the answer is not yet. [LAUGHTER] But this is definitely an important part of the workflow. So I would say that, you know, ultimately, the goal here is that, you know, as we’re looking for these patterns across, like, individual exemplar occupations, that, sort of, what we’re looking for is relationships between tasks that extrapolate out, right. Across lots of different industries, right. So, you know, it’s one thing to be able to say, you know, a lot of very deep things about how AI might influence a particular job or a particular industry. But in some sense, the goal here is to see patterns of tasks that are repeated across lots of different occupations, across lots of different sectors that say, sort of, these are the types of patterns that are really amenable to, sort of, AI being integrated well into the workforce, whereas these are scenarios where it’s much more of an augmenting story as opposed to an automating story. But I think one of the things that’s really interesting about generative AI as a technology here, as opposed to other types of automated technology, is that while there are lots of aspects of a person’s job that can be affected by generative AI, there’s this relationship between the types of work that I might use an AI for versus the types of things that are, sort of, like the core feature of what I’m doing on a day-to-day. 

HUIZINGA: Right. Gotcha … 

LUCIER: And so, maybe it’s, like, at least in the short term, it actually looks quite helpful to say that, you know, there are certain aspects of my work, like going out and summarizing a bunch of heavy data reports, that I’m very happy to have an AI, sort of, do that part of my work. So then I can go and use those things forward in, sort of, the other half of my day. 

HUIZINGA: Yeah. And that’s to Mert’s point: look how much time I just saved! Or I got a half hour back! We’ll get to that in a second. But I really now am eager, Mert, to have you explain your side of this. Brendan just gave us a wonderful task-centric view of AI’s impact on specific jobs. I want you to zoom out and talk about the holistic, as you mentioned before, or macroeconomic view in this collaboration. How are you looking at the impact of AI beyond job tasks, and what role does your work play in helping us understand how these advances in AI might affect job markets and the economy writ large? 

DEMIRER: One thing Brendan mentioned a few minutes ago is this is a prediction task. Like, we need to predict what will be the effect of AI, how AI is going to affect the economy, especially in the long run. So this is a prediction problem for which we cannot use machine learning or AI. Otherwise, it would have been a very easy problem to solve. 

HUIZINGA: Right … [LAUGHS] 

DEMIRER: So what you need instead is a model or, like, framework that will take, for example, inputs of, like, the productivity gains, for example, like Brendan talked about, or for, like, microfoundation as an input and then generate predictions for the entire economy. To do that, what I do in my research is I develop and use models of industries and firms. So these models essentially incorporate a bunch of economic agents. Like, this could be labor; this could be firms; this could be [a] policymaker who is trying to regulate the industry. And then you write down the incentives of these, like, different agents in the economy, and then you write down this model, you solve this model with the available data, and then this model gives you predictions. So you can, once you have a model like this, you can ask what would be the effect of a change in the economic environment on like wages, on productivity, on industry concentration, let’s say. So this is what I do in my research. So, like, I briefly mentioned my research on cloud computing. I think this is a very good example. When you think about cloud computing, always … everyone always, like, thinks about it helps you, like, scale very rapidly, which is true, and, like, which is the actual, like, the firm-level effect of cloud computing. But then the question is, like, how that is going to affect the entire industry, whether the industry is going to be more concentrated or less concentrated, it’s going to grow, like, faster, or which industry is going to grow faster, and so on. So essentially, in my research, I develop models like this to answer questions—these, like, high-level questions. And when it comes to AI, we have these, like, very detailed micro-level studies, like these exposure measures Brendan already mentioned, and the framework, the micro framework, we developed is a task view of AI. 
What you do is, essentially, you take the output of that micro model and then you feed it into a bigger economy-level model, and you develop a higher-level prediction. So, for example, you can apply this, like, task-based model on many different occupations. You can get a number for every occupation, like for occupation A, productivity will be 5 percent; for occupation B, it’s going to be like 10 percent; and so on. You can aggregate them at the industry level—you can get some industry-level numbers—you feed those numbers into a more, like, general equilibrium model and then you solve the model and then you answer questions like, what will be the effect of AI on wage on average? Or, like, what will be the effect of AI on, like, total output in the economy? So my research is, like, more on this answering, like, bigger industry-level or economic-level questions. 
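The roll-up Demirer describes, from task-level gains to occupation-level numbers to industry-level aggregates, can be sketched in a few lines of Python. Everything here, from the occupation names to the gain figures and weights, is invented for illustration; the researchers' actual framework is a much richer general equilibrium model.

```python
# Illustrative sketch (not the researchers' actual model): rolling up
# hypothetical task-level AI productivity gains to occupation and
# industry levels, as described above.

# Hypothetical data: occupation -> list of (task_time_share, ai_gain)
occupations = {
    "analyst": [(0.4, 0.30), (0.6, 0.05)],  # 40% of time sees a 30% gain, etc.
    "writer":  [(0.7, 0.20), (0.3, 0.00)],
}

# Hypothetical industry composition: industry -> {occupation: employment_share}
industries = {
    "media": {"analyst": 0.3, "writer": 0.7},
}

def occupation_gain(tasks):
    """Time-share-weighted average gain across an occupation's tasks."""
    return sum(share * gain for share, gain in tasks)

def industry_gain(mix):
    """Employment-weighted average gain across an industry's occupations."""
    return sum(w * occupation_gain(occupations[occ]) for occ, w in mix.items())

for name, mix in industries.items():
    print(f"{name}: {industry_gain(mix):.1%} average productivity gain")
```

Industry-level numbers like these would then be fed into a general equilibrium model to answer the economy-wide questions about wages and output.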

HUIZINGA: Well, Brendan, one of our biggest fears about AI is that it’s going to “steal our jobs.” I just made air quotes on a podcast again. But this isn’t our first disruptive technology rodeo, to use a phrase. So that said, it’s the first of its kind. What sets AI apart from disruptive technologies of the past, and how can looking at the history of technological revolutions help us manage our expectations, both good and bad? 

LUCIER: Fantastic. Such an important question. Yeah, like there’s been, you know, just so much discussion and “negativity versus optimism” debates in the world in the public sphere … 

HUIZINGA: Hope versus hype … 

LUCIER: … and in the academic sphere … yeah, exactly. Hope versus hype. But as you say, yeah, it’s not our first rodeo. And we have a lot of historical examples of these, you know, disruptive, like, so-called general-purpose technologies that have swept through the economy and made a lot of changes and enabled things like electricity and the computer and robotics. Going back further, steam engine and the industrial revolution. You know, these things are revolutions in the sense that, you know, they sort of rearrange work, right. They’re not just changing how we do things. They change what it is that we even do, like just the nature of work that’s being done. And going back to this point of automation versus augmentation, you know, what that looks like can vary quite a bit from revolution to revolution, right. So sometimes this looks like fully automating away certain types of work. But in other cases, it’s just a matter of, sort of, augmenting workers that are still doing, in some terms, what they were doing before but with a new technology that, like, substantially helps them and either takes part of their job and makes it redundant so they can focus on something that’s, you know, more core or just makes them do what they were doing before much, much faster. 

HUIZINGA: Right. 

LUCIER: And either way, you know, this can have a huge impact on the economy and especially, sort of, the labor market. But that impact can be ambiguous, right. So, you know, if I make, you know, a huge segment of workers twice as productive, then companies have a choice. They can keep all the workers and have twice the output, or they can get the same output with half as many workers or something in between, and, you know, which one of those things happens depends not even so much on the technology but on, sort of, the broader economic forces, right. The, you know, the supply and demand and how things are going to come together in equilibrium, which is why this macroeconomic viewpoint is so important to actually give the predictions on, you know, how companies might respond to these changes that are coming through the new technology. Now, you know, where GenAI is, sort of, interesting as an example is the way that, you know, what types of work it impacts, right. So generative AI is particularly notable in that it impacts, you know, high-skill, you know, knowledge-, information-based work directly, right[1]. And it cuts across so many different industries. We think of all the different types of occupations that involve, you know, summarizing data or writing a report or writing emails. There’s so many different types of occupations where this might not be the majority of what they do, but it’s a substantial fraction of what they do. And so in many cases, you know, this technology—as we were saying before—can, sort of, come in and has the potential to automate out or at least really help heavily assist with parts of the job but, in some cases, sort of, leave some other part of the job, which is a core function. And so these are the places where we really expect this human-AI collaboration view to be especially impactful and important, right. 
Where we’re going to have lots of different workers in lots of different occupations who are going to be making choices on which parts of their work they might delegate to, sort of, AI agents and which parts of the work, you know, they really want to keep their own hands on. 

HUIZINGA: Right, right. Brendan, talk a little more in detail about this idea of low-skill work and high-skill work, maybe physical labor and robotics kind of replacements versus knowledge worker and mental work replacements, and maybe shade it a little bit with the idea of inequalities and how that’s going to play out. I mean, I imagine this project, this collaboration, is looking at some of those issues, as well? 

LUCIER: Absolutely. So, yeah, when we think about, you know, what types of work get affected by some new technology—and especially, sort of, automation technology—a lot of the times in the past, the sorts of work that have been automated out are what we’d call low-skill or, like, at least, sort of, more physical types of labor being replaced or automated by, you know, robotics. We think about the potential of manufacturing and how that displaces, like, large groups of workers who are, sort of, working in the factory manually. And so there’s a sense when this, sort of, happens and a new technology comes through and really disrupts work, there’s this transition period where certain people, you know, even if at the end of the day, the economy will eventually reach sort of new equilibrium which is generally more productive or good overall, there’s a big question of who’s winning and who’s losing both in the long term but especially in that short term, … 

HUIZINGA: Yeah! 

LUCIER: … sort of intermediate, you know, potentially very chaotic and disruptive period. And so very often in these stories of automation historically, it’s largely marginalized low-skill workers who are really getting affected by that transition period. AI—and generative AI in particular—is, sort of, interesting in the potential to be really hitting different types of workers, right. 

HUIZINGA: Right. 

LUCIER: Really this sort of, you know, middle sort of white-collar, information-work class. And so, you know, really a big part of this project and trying to, sort of, get this glimpse into the future is getting, sort of, this—again, as you said—line of sight on which industries we expect to be, sort of, most impacted by this, and is it as we might expect, sort of, those types of work that are most directly affected, or are there second- or third-order effects that might do things that are unanticipated? 

HUIZINGA: Right, and we’ll talk about that in a second. So, Mert, along those same lines, it’s interesting to note how new technologies often start out simply by imitating old technologies. Early movies were stage plays on film. Email was a regular letter sent over a computer. [LAUGHS] Video killed the radio star … But eventually, we realized that these new technologies can do more than we thought. And so when we talked before, you said something really interesting. You said, “If a technology only saves time, it’s boring technology.” What do you mean by that? And if you mean what I think you mean, how does the evolution—not revolution but evolution—of previous technologies serve as a lens for the affordances that we may yet get from AI? 

DEMIRER: Let me say first, technology that saves time is still very useful technology! [LAUGHTER] Who wouldn’t want a technology that will save time? 

HUIZINGA: Sure … 

DEMIRER: But it is less interesting for us, like, to study and maybe it’s, like, less interesting in terms of, like, the broader implications. And so why is that? Because if a technology saves time, then, OK, so I am going to have maybe more time, and the question is, like, how I’m going to spend that time. Maybe I’m going to have more leisure or maybe I’m going to have to produce more. It’s, like, relatively straightforward to analyze and quantify. So however, like, the really impactful technologies could allow us to accomplish new tasks that were previously impossible, and they should open up new opportunities for creativity. And I think here, this knowledge-worker impact of AI is particularly important because I think as a technology, the more it affects knowledge worker, the more likely it’s going to allow us to achieve new things; it’s going to allow us to create more things. So I think in that sense, I think generative AI has a huge potential in terms of making us accomplish new things. And to give you an example from my personal experience, so I’m a knowledge worker, so I do research, I teach, and generative AI is going to help my work, as well. So it’s already affecting … so it’s already saving me time. It’s making me more productive. So suppose that generative AI just, like, makes me 50 percent more productive, let’s say, like five years from now, and that’s it. That’s the only effect. So what’s going to happen to my job? Either I’m going to maybe, like, take more time off or maybe I’m going to write more of the same kind of papers I am writing in economics. But … so imagine, like, generative AI is helping me writing a different kind of paper. How is that possible? So I have a PhD in econ, and if I try really hard, maybe I can do another PhD. But that’s it. Like, I can specialize only one or, like, two topics. 
But imagine generative AI as an, like, agent or collaborator having PhD in, like, hundreds of different fields, and then you can, like, collaborate and, like, communicate and get information through generative AI on really different fields. That will allow me to do different kinds of research, like more interdisciplinary kinds of research. In that sense, I think the really … the most important part of generative AI is going to be this … what it will allow us to achieve new things, like what creative new things we are going to do. And I can give you a simple example. Like, we were talking about previous technologies. Let’s think of internet. So what was the first application of internet? It’s sending an email. It saves you time. Instead of writing things on a paper and, like, mailing it, you just, like, send it immediately, and it’s a clear time-saving technology. But what are the major implications for internet, like, today? It’s not email. It is like e-commerce, or it is like social media. It allows us to access infinite number of products beyond a few stores in our neighborhood, or it allows us to communicate or connect with people all around the world … 

HUIZINGA: Yeah … 

DEMIRER: … instead of, again, like limiting ourselves to our, like, social circle. So in that sense, I think we are currently in the “email phase” of AI, … 

HUIZINGA: Right … 

DEMIRER: … and we are going to … like, I think AI is going to unlock so many other new capabilities and opportunities, and that is the most exciting part. 

HUIZINGA: Clearly, one of the drivers behind the whole AICE research initiative is the question of what could possibly go wrong if we got everything right, and I want to anchor this question on the common premise that if we get AI right, it will free us from drudgery—we’ve kind of alluded to that—and free us to spend our time on more meaningful or “human”—more air quotes there—pursuits. So, Brendan, have you and your team given any thought to this idea of unintended consequences and what such a society might actually look like? What will we do when AI purportedly gives us back our time? And will we really apply ourselves to making the world better? Or will we end up like those floating people in the movie WALL-E? 

LUCIER: [LAUGHS] I love that framing, and I love that movie, so this is great. Yeah. And I think this is one of these questions about, sort of, the possible futures that I think is super important to be tackling. In the past, people, sort of, haven’t stopped working; they’ve shifted to doing different types of work. And as you’re saying, there’s this ideal future in which what’s happening is that people are shifting to doing more meaningful work, right, and the AI is, sort of, taking over parts of the, sort of, the drudgery, you know. These, sort of, annoying tasks that, sort of, I need to do as just, sort of, side effects of my job. I would say that where the economic theory comes in and predicts something that’s slightly different is that I would say that the economic theory predicts that people will do more valuable work in the sense that people will tend to be shifted in equilibrium towards doing things that complement what it is that the AI can do or doing things that the AI systems can’t do as well. And, you know, this is really important in the sense that, like, we’re building these partnerships with these AI systems, right. There’s this human-AI collaboration where human people are doing the things that they’re best at and the AI systems are doing the things that they’re best at. And while we’d love to imagine that, like, that more valuable work will ultimately be more meaningful work in that it’s, sort of, fundamentally more human work, that doesn’t necessarily have to be the case. You know, we can imagine scenarios in which I personally enjoy … there are certain, you know, types of routine work that I happen to personally enjoy and find meaningful. 
But even in that world, if we get this right and, sort of, the, you know, the economy comes at equilibrium to a place where people are being more productive, they’re doing more valuable work, and we can effectively distribute those gains to everybody, there’s a world in which, you know, this has the potential to be the rising tide that lifts all boats. 

HUIZINGA: Right. 

LUCIER: And so that what we end up with is, you know, we get this extra time, but through this different sort of indirect path of the increased standard of living that comes with an improved economy, right. And so that’s the sort of situation where that source of free time I think really has the potential to be somewhere where we can use it for meaningful pursuits, right. But there are a lot of steps to take to, sort of, get there, and this is why it’s, I think, super important to get this line of sight on what could possibly be happening in terms of these disruptions. 

HUIZINGA: Right. Brendan, something you said reminded me that I’ve been watching a show called Dark Matter, and the premise is that there’s many possible lives we could live, all determined by the choices we make. And you two are looking at possible futures in labor markets and the economy and trying to make models for them. So how do existing hypotheses inform where AI is currently headed, and how might your research help steer them in a more optimal direction? 

LUCIER: Yeah, that’s a really big question. Again, you know, as we’ve said a few times already, there’s this goal here of getting this heads-up on which segments of the economy can be most impacted. And we can envision these better futures as the economy stabilizes, and maybe we can even envision pathways towards getting there by trying to address, sort of, the potential effects of inequality and the distribution of those gains across people. But even in a world where we get all those things right, that transition is necessarily going to be disruptive, right. 

HUIZINGA: Right. 

LUCIER: And so even if we think that things are going to work out well in the long term, in the short term, there’s certainly going to be things that we would hope to invest in to, sort of, improve for everyone. And so even in a world where we believe, sort of, the technology is out there and we really think that people are going to be using it in the ways that make most sense to them, as we get hints about where these impacts can be largest, I think that an important value there is that it lets us anticipate opportunities for responsible stewardship, right. So if we can see where there’s going to be impact, I think we can get a hint as to where we should be focusing our efforts, and that might look like getting ahead of demand for certain use cases or anticipating extra need for, you know, responsible AI guardrails, or even just, like, understanding, you know, [how] labor market impacts can help us inform policy interventions, right. And I think that this is one of the things that gets me really excited about doing this work at Microsoft specifically. Because of how much Microsoft has been investing in responsible AI, and, sort of, the fundamentals that underlie those guardrails and those possible actions means that we, sort of, in this company, we have the ability to actually act on those opportunities, right. And so I think it’s important to really, sort of, try to shine as much light as possible on where we think those will be most effective. 

HUIZINGA: Yeah. Mert, I usually ask my guests on Collaborators where their research is on the spectrum from “lab to life,” but this isn’t that kind of research. We might think of it more in terms of “lab for life” research, where your findings could actually help shape the direction of the product research in this field. So that said, where are you on the timeline of this project, and do you have any learnings yet that you could share with us? 

DEMIRER: I think the first thing I learned about this project is it is difficult to study AI! [LAUGHTER] So we are still in, like, the early stages of the project. So we developed this framework we talked about earlier in the podcast, and now what we are doing is we are applying that framework to a few particular occupations. And the challenge we had is these occupations, when you just describe them, it’s like very simple, but when you go to this, like, task view, it’s actually very complex, the number of tasks. Sometimes we see in the data, like, 20, 30 tasks they do, and the relationship between those tasks. So it turned out to be more difficult than I expected. So what we are currently doing is we are applying the framework to a few specific tasks which help us understand how the model works and whether the model needs any adjustment. And then the goal is once we understand the model on any few specific cases, we’ll scale that up. And then we are going to develop these big predictions on the economy. So we are currently not there yet, but we are hoping to get there pretty soon. 

HUIZINGA: And just to, kind of, follow up on that, what would you say your successful outcome of this research would be? What’s your artifact that you would deliver from this project as collaboration? 

DEMIRER: So ultimately, our goal is to develop predictions that will inform the trajectory the AI is taking, that’s going to inform, like, the policy. That’s our goal, and if we generate that output, and especially if it informs policy of how firms or different agents of the economy adopt AI, I think that will be the ideal output for this project. 

HUIZINGA: Yeah. And what you’ve just differentiated is that there are different end users of your research. Some of them might be governmental. Some of them might be corporate. Some of them might even be individuals or even just layers of management that try to understand how this is working and how they’re working. So wow. Well, I usually close each episode with some future casting. But that basically is what we’ve been talking about this whole episode. So I want to end instead by asking each of you to give some advice to researchers who might be just getting started in AI research, whether that’s the fields that develop the technology itself or the fields that help define its uses and the guardrails we put around it. So what is it important for us to pay attention to right now, and what words of wisdom could you offer to aspiring researchers? I’ll give you each the last word. Mert, why don’t you go first? 

DEMIRER: My first advice will be use AI yourself as much as possible. Because the great thing about AI is that everyone can access this technology even though it’s a very early stage, so there’s a huge opportunity. So I think if you want to study AI, like, you should use it as much as possible. That personally allows me to understand the technology better and also develop research questions. And the second advice would be to stay up to date with what’s happening. This is a very rapidly evolving technology. There is a new product, new use case, new model every day, and it’s hard to keep up. And it is actually important to distinguish between questions that won’t be relevant two months from now versus questions that’s going to be important five years from now. And that requires understanding how the technology is evolving. So I personally find it useful to stay up to date with what’s going on. 

HUIZINGA: Brendan, what would you add to that? 

LUCIER: So definitely fully agree with all of that. And so I guess I would just add something extra for people who are more on the design side, which is that when we build, you know, these systems, these AI tools and guardrails, we oftentimes will have some anticipated, you know, usage or ideas in our head of how this is going to land, and then there’ll always be this moment where it, sort of, meets the real users, you know, the humans who are going to use those things in, you know, possibly unanticipated ways. And, you know, this can be oftentimes a very frustrating moment, but this can be a feature, not a bug, very often, right. So the combined insight and effort of all the users of a product can be this, like, amazing strong force. And so, you know, this is something where we can try to fight against it or we can really try to, sort of, harness it and work with it, and this is why it’s really critical when we’re building especially, sort of, user-facing AI systems, that we design them from the ground up to be, sort of, collaborating, you know, with our users and guiding towards, sort of, good outcomes in the long term, you know, as people jointly, sort of, decide how best to use these products and guide towards, sort of, good usage patterns. 

[MUSIC] 

HUIZINGA: Hmmm. Well, Brendan and Mert, as I said before, this is timely and important research. It’s a wonderful contribution to the AICE research initiative, and I’m thrilled that you came on the podcast today to talk about it. Thanks for joining us. 

LUCIER: Thank you so much. 

DEMIRER: Thank you so much. 

[MUSIC FADES] 


[1] For more information, Lucier notes two resources about the economic impact of GenAI: GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models and Preparing the Workforce for Generative AI.

The post Collaborators: AI and the economy with Brendan Lucier and Mert Demirer appeared first on Microsoft Research.


Players, creators, and AI collaborate to build and expand rich game narratives


This paper was presented at the IEEE 2024 Conference on Games (IEEE CoG 2024), the leading forum on innovation in and through games.


In the fast-evolving landscape of video game development, crafting dialogues and narratives is a labor-intensive endeavor. Traditionally, creating these elements involved meticulous hand-coding, resulting in static interactions that limit player agency. However, the rise of large language models (LLMs) is introducing possibilities for richer, more dynamic narrative experiences and automating some of the more challenging aspects of game creation. Despite this advance, a key challenge with using LLMs for narrative design in games is that, without human intervention, they tend to repeat patterns.

We address this in our paper, “Player-Driven Emergence in LLM-Driven Game Narrative,” presented at IEEE CoG 2024, where we explore how LLMs can foster unique forms of creativity when players participate in the design process. Rather than replacing designers, LLMs can empower players with considerable freedom in their interactions with nonplayer characters (NPCs)—characters not controlled by the players but crucial for gameplay. These interactions provide implicit feedback for designers, offering insights unattainable with traditional dialogue trees—a branching structure of player dialogue choices affecting the narrative.

Creating and designing “Dejaboom!”

To test this hypothesis, we developed a text-adventure game called “Dejaboom!” The game’s premise involves a player waking up at home with déjà vu, recalling an explosion in their village from the day before. The objective is to relive the day and prevent the disaster. Players interact with five NPCs in the village. After a set number of steps, the bomb explodes, causing the player to lose all the items they gathered but retain memories of the NPC interactions. Figure 1 illustrates the game design.

Figure 1 (game design): The figure shows the map of the village where the game takes place. It shows the various locations that the player can explore, including home, park, restaurant, library, blacksmith’s shop, and town hall. It also shows the streets connecting the various locations. In addition to these, there are also two hidden rooms, namely a lab connected to the library and a storage room connected to the blacksmith’s shop. There are several objects placed at various locations that the player can pick up and use. There is a water bucket at home, a redstone torch in the park, shears in the blacksmith’s shop, a journal in the library, a map in the town hall, and a bomb in the storage room. There are five NPCs in the game that the player can interact with. There is Chef Maria in the restaurant, Mrs. Thompson on the residential street, Mad Hatter in the park, Merlin in the lab, and Moriarty in the town hall.
Figure. 1. A map of the village shows the locations, objects, and NPCs.

We built the game using TextWorld, an open-source, extensible engine for text adventure games, modifying it to include dialogue with NPCs through OpenAI’s GPT-4 model. TextWorld provided the core game logic, while GPT-4 allowed for dynamic input and output—including both game feedback and NPC responses. Figure 2 illustrates our implementation of the game. In a conventional text game, this setup would allow only a fixed set of player commands and offer a predefined set of game responses. However, the use of GPT-4 allows the game’s input and output to be dynamic.

Figure 2 (game implementation): The figure depicts the implementation of the Dejaboom game. When a player issues a text command, it is first processed by an LLM which classifies it as either an action or words. If it is an action (for example “chase the birds”), then it goes to the fixed game agent which generates a fixed game response (example “this verb is not recognizable”). This response is taken in by another instance of the LLM which generates a more palatable natural language response (example “You tried to chase the birds, but nothing happened”) which is then shown to the player as the game feedback. If the player's text command is classified as words by the LLM classifier (example “can I see your menu”), then it goes to the second instance of the LLM which generates an appropriate NPC response that gets shown to the player (example “Chef Maria: Of course! Our menu today features a delicious selection of Italian-American fusion dishes”).
Figure 2: In our implementation of the game, the user’s commands are classified by GPT-4 as actions or words. Actions are processed by the game agent, while words trigger GPT-4 to generate contextually appropriate NPC responses.
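A minimal sketch of this routing, with simple stand-ins for the two GPT-4 calls. The function names and the toy keyword heuristic are ours for illustration, not the paper's implementation:

```python
# Sketch of the command-routing loop described above. In the real system,
# classify() and npc_reply() are GPT-4 calls and game_agent() is TextWorld;
# here they are placeholder functions.

def classify(command: str) -> str:
    """Stand-in for the LLM classifier: 'action' vs. 'words'."""
    # Toy heuristic: questions and greetings are dialogue; the rest are actions.
    dialogue_markers = ("can ", "what ", "who ", "hello", "?")
    text = command.lower()
    return "words" if any(m in text for m in dialogue_markers) else "action"

def game_agent(command: str) -> str:
    """Stand-in for TextWorld's fixed game logic."""
    return f"You try to {command}."

def npc_reply(command: str) -> str:
    """Stand-in for the LLM generating a contextually appropriate NPC response."""
    return f"Chef Maria: (responds to '{command}')"

def handle(command: str) -> str:
    if classify(command) == "action":
        # In the paper, a second LLM pass rephrases the engine's fixed
        # response into friendlier natural language before display.
        return game_agent(command)
    return npc_reply(command)

print(handle("take the water bucket"))  # routed to the game engine
print(handle("can I see your menu?"))   # routed to the NPC dialogue model
```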


Narrative analysis and user study

Our goal was to identify narrative paths that players create and how they diverge from the designer’s original narrative. We used GPT-4 to transform player game logs into a narrative graph, where a node represents a player’s strategy at specific points and directed edges (arrows) show game progression. We compared these to a graph of the designer’s intended narrative. We defined emergent nodes as those that appear in the narrative graph of players but are not present in the original narrative graph. 
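With both graphs in hand, identifying emergent nodes reduces to a set difference over the node sets. A toy sketch, with invented node labels rather than the study's actual graphs:

```python
# Sketch of the emergent-node definition above: nodes in a player's
# narrative graph that do not appear in the designer's graph.
# The graphs and labels here are illustrative only.

designer_graph = {
    "wake up": ["search house"],
    "search house": ["warn villagers"],
    "warn villagers": ["find bomb"],
}

player_graph = {
    "wake up": ["search house"],
    "search house": ["interrogate Moriarty"],  # a new player strategy
    "interrogate Moriarty": ["find bomb"],
}

def emergent_nodes(player, designer):
    """Nodes present in the player's graph but not in the designer's."""
    nodes = lambda g: set(g) | {v for vs in g.values() for v in vs}
    return nodes(player) - nodes(designer)

print(sorted(emergent_nodes(player_graph, designer_graph)))
# → ['interrogate Moriarty']
```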

When we applied this approach to a user study with 28 gamers playing Dejaboom!, we found that players often introduced new strategies and elements, indicating a high level of creative engagement. Those generating the most emergent nodes tended to enjoy games that emphasize discovery, exploration, and experimentation, suggesting that such players are ideally suited for a collaborative approach to game development.

Figure 3 (narrative graph showing emergence): The figure shows a graph with nodes and edges. There are two types of nodes (blue nodes and green nodes). The blue nodes make up the initial narrative graph intended by the game designers whereas the green nodes indicate a few examples of the emergent nodes created by players implicitly through their gameplay. There is also a single start node and a single end node. A single path from the start node to the end node indicates one possible way to stop the explosion.
Figure 3: The single circles indicate the initial narrative graph intended by the designers. The double circles denote the emergent nodes created by players, representing creative new paths.

Implications and looking ahead

Our goal is to build methods that empower game creators to craft novel NPC experiences, design new narratives, and ultimately build entire new worlds through implicit player feedback and the progressive application of advanced AI technologies. This work is a foundational step toward a new paradigm of game development in which designers, players, and generative AI models collaboratively design and evolve games. Using AI models in this way introduces a new mechanism for capturing implicit player feedback through emergent player behaviors.

The post Players, creators, and AI collaborate to build and expand rich game narratives appeared first on Microsoft Research.

GENEVA uses large language models for interactive game narrative design

This paper was presented at the IEEE 2024 Conference on Games (opens in new tab) (IEEE CoG 2024), the leading forum on innovation in and through games.

Mastering the art of storytelling, a highly valued skill across films, novels, games, and more, requires creating rich narratives with compelling plots and characters. In recent years, the rise of AI has prompted inquiries into whether large language models (LLMs) can generate and sustain detailed, coherent storylines that engage audiences. Consequently, researchers have been actively exploring AI’s potential to support creative processes in video game development, where the growing demands of narrative design often outstrip the capabilities of traditional tools. This investigation focuses on AI’s capacity for innovation in storytelling and on the human interactions necessary to drive such advances.

In this context, we introduce “GENEVA: GENErating and Visualizing branching narratives using LLMs (opens in new tab),” presented at IEEE CoG 2024. This graph-based narrative generation and visualization tool requires a high-level narrative description and constraints, such as the number of different starts, endings, and storylines, as well as context for grounding the narrative. GENEVA uses the generative capabilities of GPT-4 to create narratives with branching storylines and renders them in a graph format, allowing users to interactively explore different narrative paths through its web interface (opens in new tab).

Visualizing narratives using graphs

The narrative graph itself is a directed acyclic graph (DAG), where each node represents a narrative beat—an event that moves the plot forward—with directed edges (arrows) marking the progression through the story’s events. These beats are the fundamental units of the narrative structure, representing the exchange of action and reaction. A single path from a start node to an end node outlines a unique storyline, and the graph illustrates the various potential storylines based on the same overarching narrative. 
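A narrative graph of this shape can be represented directly as a DAG whose start-to-end paths are the storylines. The minimal sketch below enumerates those paths by depth-first search; the node names and graph layout are invented for illustration and do not come from GENEVA itself.

```python
# Minimal sketch: a narrative graph as a DAG of beats; each start-to-end
# path is one storyline. Enumerate storylines with depth-first search.

def storylines(graph: dict, start: str, end: str) -> list:
    """All paths from start to end in the DAG; each path is one storyline."""
    paths = []

    def dfs(node, path):
        if node == end:
            paths.append(path)
            return
        for nxt in graph.get(node, []):
            dfs(nxt, path + [nxt])

    dfs(start, [start])
    return paths

# Invented example: one start beat branching into two storylines.
graph = {
    "start": ["beat_a", "beat_b"],
    "beat_a": ["end"],
    "beat_b": ["end"],
}
```

For this toy graph, `storylines(graph, "start", "end")` returns two paths, one per branch; larger graphs with multiple starts or endings enumerate the same way by calling the function per start/end pair.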

The generation and visualization of these narrative graphs are accomplished using GPT-4 in a two-step process. First, the model generates the branching storylines from the given description and constraints. Second, it produces code to render these narratives in a visually comprehensible graph format.

We detail this methodology in our paper through a case study in which we used GENEVA to construct narrative graphs for four well-known stories—Dracula, Frankenstein, Jack and the Beanstalk, and Little Red Riding Hood. Each was set in one of four distinct worlds: the game of Minecraft, the 21st century, ancient Rome, and the quantum realm. Figure 1 shows a narrative graph of Frankenstein set in the 21st century, and Figure 2 shows the storylines generated for this story.

Figure 1. A screenshot of GENEVA’s online interface, titled “Visualizing Generated Narratives”. Below the title are five dropdown menus, one each for the story, number of starts, number of ends, number of plots, and context. The values selected are the Frankenstein story with 1 start, 2 endings, and 4 plots, set in the 21st-century context. Beside these are two buttons, one that says “show graph” and another that says “show details”. Below the menu options is a large graph with nodes and edges. The one orange node on the left is annotated as the start node, and the two orange nodes on the right are annotated as the end nodes. The remaining nodes are blue, and each is annotated with a short phrase of about three to four words.
Figure 1: A narrative graph for the novel, Frankenstein, grounded in the 21st century. Additional constraints on the graph include one start, two endings, and four storylines.
Figure 2. A screenshot of GENEVA’s online interface, titled “Visualizing Generated Narratives”. Below the title are five dropdown menus, one each for the story, number of starts, number of ends, number of plots, and context. The values selected are the Frankenstein story with 1 start, 2 endings, and 4 plots, set in the 21st-century context. Beside these are two buttons, one that says “show graph” and another that says “hide details”. Below the menu options is a large text area with three storylines. Each storyline consists of a sequence of beats, and each beat has a unique number and a sentence describing it.
Figure 2: A detailed view of the four different storylines in the narrative graph in Figure 1.

Assessing GENEVA’s narrative adaptations

In our assessment, we found that GENEVA performed better in specific narrative contexts. For example, in Frankenstein’s adaptation to the 21st century, the storylines included themes like creating life from DNA fragments and genetic engineering, maintaining relevance while preserving the original story’s essence. However, upon closer examination, we noted areas for improvement, such as the need for more variety and better grounding of the narrative. Generally, stories that are better known and more thoroughly documented tend to yield richer and more varied adaptations.

Implications and looking forward

GENEVA remains a prototype, serving as a tool for exploring the narrative capabilities of LLMs. As these models evolve, we anticipate corresponding advances in their narrative generation abilities. The ultimate goal in game design is to engage players with compelling interactive experiences. With the skilled input of experienced game designers, tools like GENEVA could increasingly contribute to creating engaging gameplay experiences through iterative refinement of narrative paths.

Our collaboration with Xbox and Inworld AI (opens in new tab) continues to advance the use of AI in game development, incorporating these developments into practical tools for creators. Discover more about this transformative technology by watching this video (opens in new tab).

The post GENEVA uses large language models for interactive game narrative design appeared first on Microsoft Research.

What’s Your Story: Emre Kiciman

In the Microsoft Research Podcast series What’s Your Story, Johannes Gehrke explores the who behind the technical and scientific advancements helping to reshape the world. A systems expert whose 10 years with Microsoft span research and product, Gehrke talks to members of the company’s research community about what motivates their work and how they got where they are today. 

In this episode, Gehrke is joined by Senior Principal Research Manager Emre Kiciman. Kiciman’s work in causal machine learning has resulted in tools for finding meaning in data, including the DoWhy library for modeling and testing causal assumptions, and his study of AI focuses on advancing toward systems that are not only more secure but as positive in their impact as possible. In this episode, Kiciman shares how a side business pursued by his dad opened the door to computing; why his PhD adviser strongly recommended not using the words “artificial intelligence” in his thesis; and the moments that precipitated his moves from systems and networking to computational social science and now causal analysis and large-scale AI applications.

Learn more:

Emre Kiciman at Microsoft Research 

AI Controller Interface: Generative AI with a lightweight, LLM-integrated VM 
Microsoft Research blog, February 2024 

AICI: Prompts as (Wasm) Programs (opens in new tab) 
GitHub repo 

AI Frontiers: The future of causal reasoning with Emre Kiciman and Amit Sharma 
Microsoft Research Podcast, June 2023 

Modeling the Data-Generating Process is Necessary for Out-of-Distribution Generalization 
Publication, January 2023

An Open Source Ecosystem for Causal Machine Learning (opens in new tab) 
PyWhy.org 

U Rank Demo Screencast 
September 2008 

Transcript

[TEASER]     

[MUSIC PLAYS UNDER DIALOGUE]

EMRE KICIMAN: I think it’s really important for people to find passion and joy in the work that they do. At some point, do the work for the work’s sake. I think this will drive you through the challenges that you’ll inevitably face with any sort of project and give you the persistence that you need to really have the impact that you want to have. 

[TEASER ENDS]  

JOHANNES GEHRKE: Microsoft Research works at the cutting edge. But how much do we know about the people behind the science and technology that we create? This is What’s Your Story, and I’m Johannes Gehrke. In my 10 years with Microsoft, across product and research, I’ve been continuously excited and inspired by the people I work with, and I’m curious about how they became the talented and passionate people they are today. So I sat down with some of them. Now, I’m sharing their stories with you. In this podcast series, you’ll hear from them about how they grew up, the critical choices that shaped their lives, and their advice to others looking to carve a similar path.   

[MUSIC FADES] 


In this episode, I’m talking with Emre Kiciman, the senior principal research manager leading the AI for Industry research team at Microsoft Research Redmond. After completing a PhD in systems and networking in 2005, Emre began his career with Microsoft Research in the same area, studying reliability in large-scale internet services. Exposure to social data inspired him to refocus his research pursuits: his recent work in causal analysis—including DoWhy, a Python library for causal inference—is helping to connect the whats and whys in the abundance of data that exists. Meanwhile, his work with large language models is geared toward making AI systems more secure and maximizing their benefit to society. Here’s my conversation with Emre, beginning with some of his work at Microsoft Research and how he landed in computer science. 

GEHRKE: Welcome to What’s Your Story. So can you just tell us a little bit about what you do at MSR [Microsoft Research]?

KICIMAN: Sure. I work primarily on two areas at the moment, I guess. One is causal analysis, where we work on trying to answer cause-and-effect questions from data in a wide variety of domains, kind of, building that horizontal platform. And I work a lot recently, especially with this large language model focus, on the security of AI-driven systems: how do we make sure that these AI systems that we’re building are not opening up new vulnerabilities to attackers? 

GEHRKE: Super interesting. And maybe we can start out even before we go more in depth into that by, you know, how did you actually end up in computer science? I learned that you grew up in Berkeley. 

KICIMAN: Yeah, on average, I like to say.  

GEHRKE: On average? [LAUGHTER] 

KICIMAN: So I moved to the US with my parents when I was 2 years old, and we lived in El Cerrito, a small town just north of Berkeley. And then around middle school age, we moved to Piedmont, just south of Berkeley. So on average, yes, I grew up in Berkeley, and I did end up going there for college. And you asked about how I got into computer science. When I was probably around third or fourth grade, my dad, who was a civil engineer, decided that he wanted to start a business on the side, and he loved software engineering and wanted to build software to help automate a lot of the more cumbersome design tasks in the design of steel connections, and so he wrote … he bought a PC and brought it home and started working on his work. But then that was also my opportunity to learn what a computer was. 

GEHRKE: So that was your first computer? Was it an x86? 

KICIMAN: Yes, it was an IBM PC, the first x86, the one before the 286. And—it wasn’t the very original PC. It did have a CGA—color graphics adapter—so we could have four colors at once.  

GEHRKE: Nice. 

KICIMAN: And, yeah, that’s … it came with—luckily for me, I guess—it came with a BASIC manual. So reading that manual is how I learned how to program. 

GEHRKE: And this is the typical IBM white box with a monitor on top of it and a floppy drive, or how should I picture it? 

KICIMAN: Yeah, two floppy drives …  

GEHRKE: Two floppy drives? OK …  

KICIMAN: Two floppy drives, yeah, so you could copy from one to the other.  

GEHRKE: Five and a quarter or three and a half? 

KICIMAN: Five and a quarter, yeah, yeah. The loud, clickety-clack keyboard and, yeah, a nice monitor. So not the green and black; the one that could display the colors. And, yeah, had a lot of fun with programming. 

GEHRKE: So what were some of the first things that you wrote? 

KICIMAN: A lot of the first ones were just the examples from the book, the for loops, for example. But then after that, I started getting into some of the, you know, building, like, little mini painting tools. You know, you could move a cursor around the screen, click a button and paint to fill in a region, and then save the commands that you did to make graphics. Eventually, that actually turned into, like, a friend and I really enjoyed playing computer games, so we had in our mind we’re going to build a computer game. 

GEHRKE: Who doesn’t think that.  

KICIMAN: Of course, right? 

GEHRKE: Of course … 

KICIMAN: And so we had, like, a “choose your own adventure”–style program. I think we had maybe even four or five screens you could step through, right. And he was able to get some boxes, and we printed some manuals even. We had big plans, but then we didn’t know what to do, how to finish the game, how to get it out there, so … but we had a lot of fun.  

GEHRKE: Wow, that sounds amazing. 

KICIMAN: Really fond memories, yeah. 

GEHRKE: That sounds amazing. And then you went to Berkeley afterwards? Is that how you realized your passion, or how do you decide to study computer science?

KICIMAN: Yeah … so from that age, I was set on computing. I think my parents were a bit of a devil’s advocate. They wanted me to consider my options. So I did consider, like, mechanical engineering or industrial engineering in, like, maybe junior year of high school, but it never felt right. I went into computing, had a very smooth transition into Berkeley. They have a local program where students from the local high school can start to take college classes early. So I’d even started taking some computer classes and then just went right into my freshman year. 

GEHRKE: Sounds like a very smooth transition. Anything bumpy? Anything bumpy on the ride out there, or …?  

KICIMAN: Nothing really, nothing really bumpy. I had one general engineering class that somehow got on my schedule at 8 AM freshman year. 

GEHRKE: [LAUGHS] That’s a tough one.  

KICIMAN: That’s a tough one, yeah. And so there were a few weeks I didn’t attend class, and I knew there was a midterm coming up, so I show up. Because, you know, next week, there’s a midterm. I better figure out what they’re, what they’re learning. And I come in a couple minutes late because it’s, even though I’m intending to go, it’s still an 8 AM class. I show up a few minutes late, and everyone is heads down writing on pieces of paper. The whole room is quiet. And the TA gives me a packet and says, you might as well start now. “Oh no.” And I’m like freaking out. Like this is, this is a bad dream. [LAUGHS] And I’m flipping through … not only do I not know how to answer the questions; I don’t understand the questions, like the vocabulary. It’s only been three weeks. How did they learn so much? And then I noticed that it’s an open-book exam and I don’t have my book on top of it, like … but what I didn’t notice and what became apparent in about 20 minutes … the TA clapped his hands, and said, “All right, everyone, put it down. We’ll go over the answers now.” It was a practice. 

GEHRKE: Oh, lucky you. 

KICIMAN: Oh, my god, yes. So I did nothing but study for that exam for the next week and did fine on it. 

GEHRKE: So you didn’t have to drop the class or anything like that? 

KICIMAN: No, no, no. I studied enough that I did reasonably, you know, reasonably well.  

GEHRKE: At what point in time was it clear to you that you wanted to do a PhD or that you wanted to continue your studies? 

KICIMAN: I tried to explore a lot during my undergrad, so I did go off to industry for a summer internship. Super fun.  

GEHRKE: Where did you, where did you work?  

KICIMAN: It was Netscape. 

GEHRKE: Oh Netscape. 

KICIMAN: And it was a joint project with IBM. 

GEHRKE: Which year was that in? 

KICIMAN: This would have been ’90, around ’93.

GEHRKE: ’93 … OK, so the very early days of Netscape, actually. 

KICIMAN: Yeah, yeah. They were building Netscape Navigator 4, and the project I was on was Netscape Navigator for OS/2.  

GEHRKE: OK.

KICIMAN: IBM’s OS/2 had come out and was doing poorly against NT, and they wanted to raise its profile. And this team of 20 people were really just focused on getting this out there. And so I always thought of, you know—and I was an OS/2 user already, which is how I got onto that project. 

GEHRKE: OK … And how was the culture there, or …?  

KICIMAN: The culture, it’s what you would think of as a startup culture. You know, they gave out all their meals. There was lots of fun events. You know, dentists came into the parking lot like once a month or something like that. 

GEHRKE: Dentist?  

KICIMAN: There was, like, a yeah, it was, yeah, you know, everyone’s working too much at the office, so the company wanted to make things easy.  

GEHRKE: That sounds great. 

KICIMAN: But the next summer then, I did a research internship, a research assistantship, at Berkeley. I worked with Randy Katz and Eric Brewer and got into, you know, trying to understand cellphone networks and what they were thinking about, you know, cloud infrastructure for new cellular technologies. 

GEHRKE: And Eric Brewer, was he, at that point in time, already running Inktomi, or … ? 

KICIMAN: He was already running Inktomi. Yeah, yeah, he’d already started it. I don’t think it was public yet at the time, but maybe getting there.  

GEHRKE: OK. Well, this was right at the beginning when, like, all the, you know, cloud infrastructure was defined and, you know, a lot of the basics were set. So you did this internship then in your, after your junior year, the second one?  

KICIMAN: Yeah, after my junior year. It was then senior year, and it was time to apply for, you know, what’s going to come after college. And I knew it … after that assistantship at Berkeley, I knew I was going to go do a PhD. 

GEHRKE: So what is the thing about the internship that made you want to stay in research? 

KICIMAN: Oh, it’s just the … it gave a vision of the future. Like, we were playing with, like, you know, there were people in the lab playing with video over the internet and, you know, teleconferencing, and just seeing that, it felt like you were seeing into the future and diving deep technically across the stack in a way that the industry internship hadn’t done. And so that part of it and obviously lots of particulars. You know, lots of internships do go very deep in industry, as well, but that’s what struck me, is that, kind of, wanting to learn was the big driver.  

GEHRKE: And what excited you about systems as compared to something that’s more applications-oriented or more touching the user? I feel like systems you always have to have this, kind of, drive for infrastructure and for scale and for, you know, building the foundation as compared to, like, directly impacting the user. 

KICIMAN: I think the way I think about systems today—and I can’t remember what it was about systems then. I’d always done operating … like, operating systems was one of my first upper-division courses at Berkeley and everything. So, like, I certainly enjoyed it a lot. But the way I think about systems now—and I think I do bring systems thinking to a lot of the work I do, even in AI and responsible AI—is the way you structure software, it feels like you should be making a statement about what the underlying problem is, what is the component you should be building from an elegance or first-principles perspective. But really, it’s about the people who are going to be using and building and maintaining that system. You want to componentize it so that the teams who are going to be building the bigger thing can work independently, revise and update their software without having to coordinate every little thing. I think that’s where that systems thinking comes in for me, is what’s the right abstraction that’s going to decouple folks from each other. 

GEHRKE: That’s a really great analogy because the way it was once told to me was that systems is really about discovering the beauty in large software. Because once you touch the user, you, sort of, have to do whatever is necessary to, you know, make the user happy. But in the foundations, you should have simplicity; you should have ease; you should have elegance. Is that how you think about it? 

KICIMAN: I do think about those aspects, but it’s for a purpose. You know, you want the elegance and the simplicity so that you can have, you know, one team working on Layer 1 of the stack, another team working on Layer 2 of the stack, and you don’t want them to have to talk to each other every 10 minutes when they’re making any change to any line of code, right. And so thinking about, what is the more fundamental layer of abstraction that lets these people work on separate problems? That’s what’s important to me. And, of course, like, that then interplays with people’s interests and expertise. And as people’s expertise evolves, that might mean that that has implications for the design of your system.  

GEHRKE: And so you’re, OK, you’re an undergrad. You have done this research experience; you now apply. So now you go to grad school. Do you do anything fun between your undergrad and grad school? 

KICIMAN: No, I went straight in. 

GEHRKE: Right straight in?  

KICIMAN:  Right straight in. I did my PhD at Stanford. So I went, you know, a little way to school. 

GEHRKE: To a rival school, isn’t it? Isn’t it a big rival school? 

KICIMAN: To a rival school. Well, the undergrad school wins. I think that’s the general rule of thumb. But I did continue working with folks at Berkeley. So my adviser was also from Berkeley and so …  

GEHRKE: Who was your adviser? 

KICIMAN: My adviser was Armando Fox, …  

GEHRKE: OK, yeah. Mm-hmm.  

KICIMAN: and we had a … 

GEHRKE: Recovery-oriented computing? 

KICIMAN: Yes, exactly. Recovery-oriented computing. And the other person on the recovery-oriented computing project …  

GEHRKE: Dave Patterson …  

KICIMAN: … was Dave Patterson, yeah. 

GEHRKE: So it was really a true, sort of, Stanford-Berkeley joint project in a way?

KICIMAN: Yes, yeah. And that was my PhD. The work I did then was the first work to apply machine learning to the problem of fault detection and diagnosis in large-scale systems. I worked with two large companies—one of them was Amazon; one of them was anonymous—to test out these ideas in more realistic settings. And then I did a lot of open-source work with J2EE to demonstrate how you can trace the behavior of a system and build up models of its behavior and detect anomalies. Funnily enough, I know this is going to sound a little alien to us now maybe in today’s world: Dave and Armando would not let me use the phrase “artificial intelligence” anywhere in my thesis because they were worried I would not be able to get a job. 

GEHRKE: I see. Because that was, sort of, one of … I mean, AI goes through these hype cycles and then, you know, the winters again, and so this was one of the winter times? 

KICIMAN: This was definitely a wintertime. I was able to use the phrase “machine learning” in the body of the thesis, but I had to make up something about statistical monitoring for the title. 

GEHRKE: So what is the actual final title of your thesis, if you remember it? 

KICIMAN: “Statistical monitoring for fault detection and diagnosis in large-scale internet services” or something like that. 

GEHRKE: Makes sense. 

KICIMAN: Yeah. 

GEHRKE: So you replaced AI with statistical modeling and then everything [turned out all right]? 

KICIMAN: Yes, yeah. Everything … then it didn’t sound too hype-y. 

GEHRKE: And then after your PhD, you went straight to MSR, is that right? 

KICIMAN: Yeah. I mean, so here I’m coming out of my PhD with a focus on academic-style research for large-scale systems. Kind of boxed myself in a little bit. No university has a large-scale internet service, and most large-scale internet service companies don’t have research arms. So Microsoft Research was actually the perfect fit for this work. And when I got here, I started diving in and actually expanding a little bit and thinking about what are the end-to-end reliability issues with our services. So assume that the back end is running well. What else could go wrong that’s going to get in the way of the user? So I had one project going on, wide area network reliability with David Maltz, and one project …  

GEHRKE: Who is now CVP in Azure.  

KICIMAN: Who’s now, yeah, leading Azure network—the head of Azure networking. And one project on how we can monitor the behavior of our JavaScript applications that were just starting to become big. Like around then is when, you know, the first 10,000-line, 100,000-line-of-code JavaScript applications [were] appearing, and we had no idea whether they were actually running correctly, right? They’re running on someone else’s browser and someone else’s operating system. We didn’t know.  

GEHRKE: A big one at that point in time, I think was Gmail, right? This was, sort of, a really big one. But did we have any big ones in Microsoft? 

KICIMAN: Gmail was the first big one in the industry. 

GEHRKE: Hotmail, was it also Java, based in JavaScript? 

KICIMAN: Hotmail was not initially JavaScript based. The biggest one at that time was our maps. Not Bing maps, but whatever we called it.  

GEHRKE: MSN maps, or …  

KICIMAN: Probably something like that, yeah, yeah.  

GEHRKE: I see. And so you applied your techniques to that code base and tried to find a lot of bugs? 

KICIMAN: Yeah, this project was—and this was about data gathering, right, so I’m still thinking about it from the perspective of how do I analyze data to tell me what’s going on. We had data for the wide area network, but these web applications, we didn’t have any. So I’m, like, I’m going to build this infrastructure, collect the data, so that in a couple years, I can analyze it. And so what I wrote was a proxy that sat on the side of the IIS server and just dynamically instrumented all the JavaScript that got shipped out. And the idea was that no one user was going to pay the cost of the instrumentation, but everyone would pay a little small percentage, and then you could collect it in the back end to get the full complete picture.  

GEHRKE: Right. It’s so interesting because, I mean, in those days, right, you still thought maybe in terms of years and so on, right. I mean, you’ve said, well, I instrumented, then maybe in a year, I have some data. And today it happens that I instrument, and tomorrow I have enough data to make a decision on an A/B test and so on, right. It was a very different time, right. And also, it was probably a defining time for Microsoft because we moved into online services, right. We moved into large-scale internet services. So it must have been exciting to be in the middle of all of this. 

KICIMAN: It really was. I mean, there was a lot of change happening both inside Microsoft and outside Microsoft. That’s when … soon after this is when social networking started to become big, right. You started seeing Facebook and Twitter show up, and search became a bigger deal for Microsoft when we started investing in Windows Live and then Bing, and that’s actually … my manager, Yi-Min Wang, actually joined up with Harry Shum to create the Internet Services Research Center with the specific focus of helping Bing. And so that also shifted my focus a little bit and so had me looking more at some of the social data that would, kind of, take my trajectory on a little bit further.

GEHRKE: Right. I mean, so you’re unique in that, you know, people very often, they come in here and, you know, they’re specialists in systems, and they branch out within systems a little bit and, you know, of course, move with time. Maybe now they do, you know, AI infrastructure. But you have really moved quite a bit, right. I mean, you did your PhD on systems … I mean, systems and AI really, the way I understand it. Then you worked here a little bit more on systems in wide area and large-scale systems. But then, you know, you really became also an expert in causality and looked at, sort of, the social side. And now you, of course, have started to move very deeply into LLMs. So rather than talking about the topics itself, how do you decide? How do you make these decisions? How do you … you know, you’re a world expert on x, and how do you, in some sense, throw it all away and go to y? Do you decide one day, “I’m interested in y“? Do you, sort of, shift over time a little bit? How do you do it? 

KICIMAN: I’ve done it, I think, two or maybe three times, depending on if you count now, and some transitions have gone better than others. I think my transition from systems to social data and computational social science, it was driven by a project that we did for search at the time. Shuo Chen, another researcher here at Microsoft Research, built a web application that lets you give very concrete feedback back to Windows Live. You could drag and drop the results around and say, this is what I wanted it to look like. And this made, you know, feedback much more actionable and helped really understand DSATs and where they’re coming from. DSAT being dissatisfactions. And I looked at that and I was like, I want to be able to move search results around and share with my friends. And I, kind of, poked at Shuo, you know, asked him if he would build this, and he said no. He said he’s busy. So eventually, I—because I knew something about JavaScript applications—decided to just drop things and spend six months building out this application. So I built out this social search application where you could drag and drop search results around, share it with your friends, and we put it out, actually. We got it deployed as an external service. We had maybe 10,000 people kick the tires.  

GEHRKE: Within Microsoft or …?  

KICIMAN: No, externally.  

GEHRKE: OK.  

KICIMAN: Yeah. There was a great headline that, like, Google then fast followed with a similar feature, and the headline was like, Google fast follows, basically, on Microsoft. Our PR folks were very excited about that. I say this all … I mean, it’s all history now. But certainly, it was fun at the time. But now we’re … I’m giving this demo, this talk, about this prototype that we built and what we’re learning about, you know, what’s in people’s way, what’s friction, what do they like and not like, etc. And I’m standing up and, you know, giving this presentation, this demo, and someone says, hey could you, could you go back to, you know, go back in the browser? On the bottom right corner, it says Mike did something on this search page; he edited some search results. Could you click on that? I want to know what he did. I’m like, OK, yeah, sure. I click on it. And [it’s like], OK, that’s great. That’s, that’s really interesting. And this happened multiple times. Like, in a formal presentation, for someone to interrupt you and ask a personal question just out of their own curiosity, that’s what showed me … that’s what got me really thinking deeply about the value of this social data and, like, why is it locked up in a very specific interface. What else could you do with this data if it’s so engaging, so fascinating, that people are willing to interrupt a speaker for some totally irrelevant, basically, question? And that’s when I switched to really trying to figure out what to do with social data. 

GEHRKE: I see. So it was this, kind of, really personal experience of people being so excited about that social interaction on the demos that you’re giving. 

KICIMAN: Exactly. They cared about their friends and what their friends did, and that was super clear.

GEHRKE: So, so coming back, let’s go there in a second, but coming back to the story that you told, you said you had 10,000 external users. 

KICIMAN: Yeah.

GEHRKE: So I’m still, you know, also always trying to learn what we can do better because we sometimes have prototypes that are incredibly valuable. They’re prototypes that have fans; they’re prototypes that, you know, the fans even want to contribute. But then somehow, we get stuck in the middle; and they don’t scale, and they don’t become a business. What happened with that?

KICIMAN: Yeah. 

GEHRKE: Also in [retrospect], … 

KICIMAN: In retrospect … 

GEHRKE: … what, what … should we have done something different, or did it live up to its potential? 

KICIMAN: I think we learned something. I think that there were a couple of things we learned. One was that, you know, every extra click that people wanted to do, you know, took the number of interactions down by, you know, an order of magnitude. So starring something and bringing it to the top, that was very popular. Dragging and dropping? Little bit less so. Dragging and dropping from one search to a different search? So maybe I’ll search for, you know, “Johannes,” find your homepage, and then drag and drop it to, like, people’s, you know, publications list to, like, keep an eye on or something. Like that, almost never. And people were very wary about editing the page. Like, what if I make a mistake? What if it’s just, just me, like, who wants this, and I’m messing up search for the rest of the world? And it’s like, no, no, it’s just your friends, like just you and your friends who are going to see this. And so we learned a lot about people’s mental models and, like, what stood in the way of, you know, interactions on the web. There were lots of challenges to doing this at scale. I mean, we needed, for example, a way of tracking users. We needed a way of very quickly, within 100 milliseconds, getting information about a user’s past edits to search pages into, you know, into memory if we were going to do this for real on Windows Live. And we just didn’t have the infrastructure.

GEHRKE: I see. And those problems were hard in those days. 

KICIMAN: Yeah. A prototype is fine. People, you know, will handle a little bit of latency if it’s a research prototype, but for everyday use, you need something more. 

GEHRKE: And there was no push to try it, to land it somehow, or what … ?  

KICIMAN: There were big pushes, but the infrastructure, it was really … 

GEHRKE: I see. It was really an infrastructure problem, then? 

KICIMAN: Yeah, yeah. 

GEHRKE: OK. Interesting because it sounds to me like, wow, there’s an exciting research problem there; now you need the infrastructure to try to make all of these things really, really fast. It’s always fascinating to see, you know, where things get stuck and how they, how they proceed. 

KICIMAN: Yeah, I think it’d be a lot easier to build that—from an infrastructure point of view—today. But, of course, then there’s lots of other questions, like is this really what, you know, the best thing to do. Like I mentioned, Google had this fast follow feature. They also removed it afterwards, as well.  

GEHRKE: OK. Yeah, hindsight is always, you know, twenty-twenty. So, OK, so you’re now starting to move into social computing, right, and trying to understand more about social interactions between users. How did you end up in causality, and then how did you make the switch to LLMs? And maybe even more about this; I mean, I understand here this was, sort of, this personal story that you really saw that, you know, the audience was really asking you about what’s happening here and that, sort of, motivated you. Was it always this personal drive, or was it always others who pulled you? And how did you make these switches? 

KICIMAN: I think the switch from systems into social, it was about trying to get closer to problems that really mattered to people. I really enjoy working on systems problems, but oftentimes, they feel like they’re in the back end. And so I wanted something where, you know, even if I’m not the domain expert working on something, I can feel like I’m making a contribution to that problem. The transition from social data into causality and LLMs, that was a bit smoother. Working with social data, trying to understand what it meant and what it said about the world in aggregate, was a super-fascinating problem. So much information is embedded in the digital traces that people leave behind. But it was really difficult for people to come to solid conclusions. There was one conference I went to where almost every presentation that day gave some fascinating insight. This is how people make friendships. This is how we’re seeing signs of disease spread through real-world interactions as they show up in social data. Here’s how people spend their time. And then people would close; their conclusion slide every time was, “And, of course, correlation is not causation, so anything could actually be happening.” Like, that is such a bummer. Beautiful theory, great understanding. You spent so much time. I feel like I got some insight. And then you pull the rug out and say, but maybe not. And I’d heard that there was work on causal analysis and that there were certain conditions and ways to get actual learned causal relationships from data. So that’s the day I decided I’m going to go figure out what that is and how to apply it to social data for these types of questions. 
And I went out, and the first work there was a collaboration with Munmun De Choudhury, faculty at Georgia Tech, looking at online traces related to mental health and suicidal ideation and trying to understand what some of the factors were in a more, in a more solid and causal fashion. And so this really became, like, this was … this interest in computational social science really ended up branching out into two areas. One, obviously, I’m caring about, what can we learn about the world? Part of this is, of course, thinking deeply about the implications of AI on society, like what is it going to mean that we have this data for all of these, you know, societal challenges? And then causality. So the AI and its implications on society is what led towards the work on the security of AI systems and now security of AI as it relates to large language models. And then causality was the other branch that split off from there. Both of them really stemming from this desire to see that we have a positive impact with AI.

GEHRKE: So you mentioned that, you know, you were sitting in these talks and people are talking about the correlation, and now you finally have this new tool, which is causation. So what are some of the examples where, you know, with correlation you came out with answer A, but now causation gave you some better, some real deep insights? 

KICIMAN: I haven’t gone looking to refute studies, so … 

GEHRKE: I see. OK.  

KICIMAN: … but there are many well-known studies in the past where people have made mistakes because they didn’t account for the right confounding variables. Ronny Kohavi has a great list of these on one of his websites. But a fun one is a study that came out in the late ’90s on the influence of night lights on myopia in children. So this was a big splash. I think it made it to like Newsweek or 60 Minutes and stuff, that if you have night lights in the house, your kids are more likely to need glasses. And this was wrong. 

GEHRKE: My parents told me all the time, don’t read in bed, you know, with your flashlight because your eyes are going to get bad. 

KICIMAN: Yes.  

GEHRKE: That’s the story basically, right? 

KICIMAN: This was, yeah, the night lights that plug in the wall.  

GEHRKE: But that’s the …  

KICIMAN: That’s the idea, the same thing. 

GEHRKE: The same thing, right. 

KICIMAN: And so these people analyzed a bunch of data, and they found that there was a correlation, and they said, you know, this is a cause. And the problem was that they didn’t account for the parents’ myopia. Apparently, parents who had myopia were more likely to install night lights. And then you have the genetic factor actually causing the myopia. Very simple. But, you know, people had to replicate this study to realize it was a mistake. Others were things like correlations around vitamin C, I think, that have been reported repeatedly and then refuted in randomized controlled trials. But there’s many of these. Medicine, in particular, has a long history of false correlations leading people astray. 
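The confounding pattern Kiciman describes can be sketched in a few lines of Python. All of the probabilities and variable names below are invented for illustration; the point is only that a spurious correlation appears in the raw data and vanishes once you condition on the confounder:

```python
import random

random.seed(0)
n = 100_000

# Hypothetical data-generating process: parental myopia is the confounder.
# It raises both the chance of installing a night light and (genetically)
# the chance of childhood myopia; the light itself has no effect at all.
rows = []
for _ in range(n):
    parent = random.random() < 0.3                      # parental myopia
    light = random.random() < (0.7 if parent else 0.2)  # night light installed
    child = random.random() < (0.5 if parent else 0.1)  # child develops myopia
    rows.append((parent, light, child))

def p_child(subset):
    """Fraction of children with myopia in a subset of rows."""
    return sum(c for _, _, c in subset) / len(subset)

with_light = [r for r in rows if r[1]]
without_light = [r for r in rows if not r[1]]

# Raw comparison: looks like night lights "cause" myopia.
print("P(myopia | light)    =", round(p_child(with_light), 3))
print("P(myopia | no light) =", round(p_child(without_light), 3))

# Condition on the confounder: within each stratum the gap disappears.
for parent in (True, False):
    stratum = [r for r in rows if r[0] == parent]
    lit = [r for r in stratum if r[1]]
    unlit = [r for r in stratum if not r[1]]
    print(f"parent myopia={parent}:",
          round(p_child(lit), 3), "vs", round(p_child(unlit), 3))
```

Running this shows a sizable marginal gap between the light and no-light groups, while within each parental-myopia stratum the two rates are essentially identical, which is exactly how the original study went wrong.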

GEHRKE: Do you have a story where here at Microsoft your work in causation had a really big impact? 

KICIMAN: You know, the one—it’s still ongoing—but one of the ones that I’m really excited about now, thinking also from the broader societal impact lens, is a collaboration with Ranveer Chandra and his group. So with a close collaborator at MSR India, Amit Sharma, we’ve developed a connection between representation learning and the underlying causal representation of the data-generating process that’s driving something. So if you imagine we want to learn a classifier on an object, on an image, and we want that classifier to generalize to other settings, there are lots of reasons why this can go wrong. You know, a classic example is the question of, is this picture showing you a camel, or is it showing you a cow? The classifier is much more likely to look at the background, and if it’s green grass, it’s probably a cow. If it’s sandy desert, it’s probably a camel. But then you fail if you look at a camel in the zoo or a cow on a beach, right. So how do you make sure that you’re looking at the real features? People have developed algorithms for this. But no algorithm is actually robust across all the different kinds of distribution shifts that people see in the real world. Some algorithms work on these kinds of distribution shifts. Some algorithms work on those kinds of distribution shifts. And it was a bit of an interesting puzzle, I think, as to why. And so we realized that these distribution shifts, if you look at them from a causal perspective, you can see that the algorithms are actually imposing different statistical independence constraints. And you can read those statistical independence constraints off of a causal graph. And the reason that some algorithms worked well in some settings was that the underlying causal graph implied a different set of statistical independence constraints in that setting. And so that algorithm was the right one for that setting. 
If you have a different causal graph with different statistical independence constraints, the other algorithm was better. And so now you can see that no one algorithm is going to work well across all of them. So we built an adaptive algorithm that looks at the causal graph, picks the right statistical independencies, and applies them, and now what we’re doing with this algorithm is we’re applying it to satellite imagery to help us build a more generalizable, more robust model of carbon in farm fields so we can remotely sense and predict what the carbon level is in a field. And so, the early results …  

GEHRKE: And that’s important for what?

KICIMAN: And so this is important because soil is seen as a very promising medium for sequestering carbon from a climate change perspective. And also, the higher your soil carbon, usually the healthier the soil is, as well. It’s able to absorb more water, so less flooding; your crops are more productive because of the microbial growth that’s happening. And so people want to adopt policies and methods that increase the soil carbon in their fields for all of these reasons. But measuring soil carbon is really labor-intensive. You have to go sample it, take it off to a lab, and it’s too expensive for people to do regularly. And so if we can develop remote-sensing methods that are able to take a satellite image and, you know, really robustly predict what the real soil carbon measurement would be, that’s really game changing. That’s something that will help us evaluate policies and whether they’re working; help us evaluate what the right practices should be for a particular field. So I’m really excited about that.  

GEHRKE: That’s really exciting. You’d mentioned when we talked before that you’d benefited in your career from several good mentors. How do you think about mentoring, and what are the ways that you benefited from it? And how do you, you know, live that now in your daily life as you’re a mentor now to the next generation? 

KICIMAN: Yeah, when I look at all the people—and there are so many—who have given me a hand and advice along the way, I often find I pick up on some attributes of a particular mentor and find that it’s something that I want to emulate. So recognizing, you know, everyone is complicated and no one is perfect, but there are so many ways that individuals get things right, and trying to understand what it is that they’re doing right and how I can try and repeat that for, like you said, the next generation, I think, is really, really important. One story, for example: around 2008, while I was still working on large-scale internet services, I was going around the company to, kind of, get a sense of the current state of the reliability of our services and how we architect and run them. And so I was talking to developers and architects and Ops folks around the company, and James Hamilton was a great mentor at that moment, helping me connect, helping suggest questions that I might ask. 

GEHRKE: So he was working on SQL Server reliability, right, at that point in time or on Windows reliability? 

KICIMAN: He was already starting to move over into datacenter reliability. At the time, right before he moved over to the research side of things, I think he was one of the heads of our enterprise email business, and then he came over to research to focus on, I think, datacenters in general. And, yeah, he just donated so much of his time. He was so generous with reviewing this large report that I was writing and just helping me out with insights. That struck me, because he’s a very busy person. He’s doing all this stuff, and I’d send him an email with, you know, 15 pages, and he’d respond with feedback within a couple of hours every morning. That was astonishing to me, especially in hindsight. But that kind of generosity of time, and trying to help direct people’s work in a way that’s going to be most impactful for what they want to achieve, that’s something I try and emulate today. 

GEHRKE: So, so, you know, you’ve benefited from a lot of great mentors and you said you’re now also a mentor to others. Do you have any last piece of advice for any of our listeners? 

KICIMAN: I think it’s really important for people to find passion and joy in the work that they do and, at some point, do the work for the work’s sake. I think this will drive you through the challenges that you’ll inevitably face with any sort of project and give you the persistence that you need to really have the impact that you want to have. 

GEHRKE: Well, thanks for that advice. And thanks for being in What’s Your Story, Emre. 

KICIMAN: Thanks very much, Johannes. Great to be here.  

[MUSIC] 

To learn more about Emre or to see photos of Emre as a child in California, visit aka.ms/ResearcherStories. 

[MUSIC FADES] 


[1] Kiciman later noted the year he interned at Netscape was 1997. 

The post What’s Your Story: Emre Kiciman appeared first on Microsoft Research.


Research Focus: Week of July 29, 2024


Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft.


Scalable Differentiable Causal Discovery in the Presence of Latent Confounders with Skeleton Posterior

Differentiable causal discovery has made significant advancements in the learning of directed acyclic graphs. However, its application to real-world datasets remains restricted due to the ubiquity of latent confounders and the requirement to learn maximal ancestral graphs (MAGs). Previous differentiable MAG learning algorithms have been limited to small datasets and failed to scale to larger ones (e.g., with more than 50 variables).

In a recent paper: Scalable Differentiable Causal Discovery in the Presence of Latent Confounders with Skeleton Posterior, researchers from Microsoft and external colleagues explore the potential for the causal skeleton, the undirected version of the causal graph, to improve accuracy and reduce the search space of the optimization procedure, thereby enhancing the performance of differentiable causal discovery. They propose SPOT (Skeleton Posterior-guided OpTimization), a two-phase framework that harnesses the skeleton posterior for differentiable causal discovery in the presence of latent confounders.

Extensive experiments on various datasets show that SPOT substantially outperforms state-of-the-art methods for MAG learning. SPOT also estimates the skeleton posterior more accurately than non-parametric bootstrap-based methods and more recent variational inference-based methods. The adoption of the skeleton posterior shows strong promise across a range of causal discovery tasks.


Evaluating the Feasibility of Visual Imagery for an EEG-Based Brain–Computer Interface

Brain signals recorded via non-invasive electroencephalography (EEG) could help patients with severe neuromuscular disorders communicate with and control the world around them. Brain-computer interface (BCI) technology could use visual imagery, or the mental simulation of visual information from memory, as an effective control paradigm, directly conveying the user’s intention.

Initial investigations have been unable to fully evaluate the capabilities of true spontaneous visual mental imagery. One major limitation is that the target image is typically displayed immediately preceding the imagery period. This paradigm does not capture spontaneous mental imagery, as would be necessary in an actual BCI application, but something more akin to short-term retention in visual working memory.

In a recent paper: Evaluating the Feasibility of Visual Imagery for an EEG-Based Brain–Computer Interface, researchers from Microsoft and external colleagues show that short-term visual imagery following the presentation of a specific target image provides a stronger, more easily classifiable neural signature in EEG than spontaneous visual imagery from long-term memory following an auditory cue for the image. This research, published in IEEE Transactions on Neural Systems and Rehabilitation Engineering, provides the first direct comparison of short-term and long-term visual imagery tasks and provides greater insight into the feasibility of using visual imagery as a BCI control strategy.



Evolving Roles and Workflows of Creative Practitioners in the Age of Generative AI

Many creative practitioners – designers, software developers, and architects, for example – are using generative AI models to produce text, images, and other assets. While human-computer interaction (HCI) research explores specific generative AI models and creativity support tools, little is known about practitioners’ evolving roles and workflows with models across a project’s stages. This knowledge could help guide the development of the next generation of creativity support tools.

In a recent paper: Evolving Roles and Workflows of Creative Practitioners in the Age of Generative AI, researchers from Microsoft and the University of California, San Diego contribute to this knowledge by employing a triangulated method to capture information from interviews, videos, and survey responses of creative practitioners reflecting on projects they completed with generative AI. Their observations help uncover a set of factors that capture practitioners’ perceived roles, challenges, benefits, and interaction patterns when creating with generative AI. From these factors, the researchers offer insights and propose design opportunities and priorities intended to encourage reflection from the wider community of creativity support tool and generative AI stakeholders, such as systems creators, researchers, and educators, on how to develop systems that meet the needs of creatives in human-centered ways.


“It’s like a rubber duck that talks back”: Understanding Generative AI-Assisted Data Analysis Workflows through a Participatory Prompting Study

End-user tools based on generative AI can help people complete many tasks. One such task is data analysis, which is notoriously challenging for non-experts, but also holds much potential for AI. To understand how data analysis workflows can be assisted or impaired by generative AI, researchers from Microsoft conducted a study using Bing Chat via participatory prompting, a newer methodology in which users and researchers reflect together on tasks through co-engagement with generative AI. The recent paper: “It’s like a rubber duck that talks back”: Understanding Generative AI-Assisted Data Analysis Workflows through a Participatory Prompting Study, demonstrates the value of the participatory prompting method. The researchers found that generative AI benefits the information foraging and sensemaking loops of data analysis in specific ways, but also introduces its own barriers and challenges, arising from the difficulties of query formulation, specifying context, and verifying results. Based on these findings, the paper presents several implications for future AI research and the design of new generative AI interactions.

The post Research Focus: Week of July 29, 2024 appeared first on Microsoft Research.
