Ideas: Bug hunting with Shan Lu


Behind every emerging technology is a great idea propelling it forward. In the Microsoft Research Podcast series Ideas, members of the research community at Microsoft discuss the beliefs that animate their research, the experiences and thinkers that inform it, and the positive human impact it targets.

In this episode, host Gretchen Huizinga talks with Shan Lu, a senior principal research manager at Microsoft. As a college student studying computer science, Lu saw classmates seemingly learn and navigate one new programming language after another with ease while she struggled. She felt like she just wasn’t meant to be a programmer. But this perceived lack of skill turned out to be, as an early mentor pointed out when she began grad school, what made Lu an ideal bug hunter. It’s a path she’s pursued since. After studying bugs in concurrent systems for more than 15 years—she and her coauthors built a tool that identified over a thousand such bugs, work recognized in an award-winning 2019 paper—Lu is focusing on other types of code defects. Recently, Lu and collaborators combined traditional program analysis and large language models in the search for retry bugs, and she’s now exploring the potential role of LLMs in verifying the correctness of large software systems.

Learn more:

If at First You Don’t Succeed, Try, Try, Again…? Insights and LLM-informed Tooling for Detecting Retry Bugs in Software Systems
Publication, November 2024

Abstracts: November 4, 2024
Microsoft Research Podcast, November 2024

Automated Proof Generation for Rust Code via Self-Evolution 
Publication, October 2024

AutoVerus: Automated Proof Generation for Rust Code
Publication, September 2024

Efficient and Scalable Thread-Safety Violation Detection – Finding thousands of concurrency bugs during testing
Publication, October 2019

Learning from Mistakes: A Comprehensive Study on Real World Concurrency Bug Characteristics
Publication, March 2008 

Verus: A Practical Foundation for Systems Verification
Publication, November 2024

Transcript

[TEASER] [MUSIC PLAYS UNDER DIALOGUE]

SHAN LU: I remember, you know, those older days myself, right. That is really, like, I have this struggle that I feel like I can do better. I feel like I have ideas to contribute. But just for whatever reason, right, it took me forever to learn something which I feel like it’s a very mechanical thing, but it just takes me forever to learn, right. And then now actually, I see this hope, right, with AI. You know, a lot of mechanical things that can actually now be done in a much more automated way, you know, by AI, right. So then now truly, you know, my daughter, many girls, many kids out there, right, whatever, you know, they are good at, their creativity, it’ll be much easier, right, for them to contribute their creativity to whatever discipline they are passionate about.

[TEASER ENDS]

GRETCHEN HUIZINGA: You’re listening to Ideas, a Microsoft Research Podcast that dives deep into the world of technology research and the profound questions behind the code. I’m Gretchen Huizinga. In this series, we’ll explore the technologies that are shaping our future and the big ideas that propel them forward.

[MUSIC FADES]

Today I’m talking to Shan Lu, a senior principal research manager at Microsoft Research and a computer science professor at the University of Chicago. Part of the Systems Research Group, Shan and her colleagues are working to make our computer systems, and I quote, “secure, scalable, fault tolerant, manageable, fast, and efficient.” That’s no small order, so I’m excited to explore the big ideas behind Shan’s influential research and find out more about her reputation as a bug bounty hunter. Shan Lu, welcome to Ideas!


SHAN LU: Thank you.

HUIZINGA: So I like to start these episodes with what I’ve been calling the “research origin story,” and you have a unique, almost counterintuitive, story about what got you started in the field of systems research. Would you share that story with our listeners?

LU: Sure, sure. Yeah. I grew up fascinated by the idea that I would become a mathematician. I think I was good at math, and at some point, actually, until I entered college, I was still, you know, thinking about, should I do math? Should I do computer science? For whatever reason, I think someone told me, you know, doing computer science will help you; it’s easier to get a job. And I reluctantly picked up a computer science major. And then there were a few years in my college when I had a really difficult time with programming. And I also remember, like, I spent a lot of time learning one language—we started with Pascal—and I felt like I finally knew what to do, and then there’s yet another language, C, and another class, Java. And I remember, like, the teacher would ask us to do a programming project, and there were times I just didn’t know how to get started. And I remember, at that time, in my class, I think we only had, like, four girls taking this class that required programming in Java, and none of us had learned Java before. And when we asked our classmates, when we asked the boys, they just naturally knew what to do. It was really, really humiliating. Embarrassing. I felt like I was just not born to be a programmer. And then, I came to graduate school. I was thinking about, you know, what kind of research direction I should do. And I was thinking that, oh, maybe I should do theory research, like, you know, complexity theory or something. You know, after a lot of back and forth, I met my eventual adviser. She was a great, great mentor to me, and she told me that, hey, Shan, you know, my group is doing research about finding bugs in software. And she said her group is doing system research, and she said a lot of current team members are all great programmers, and as a result, they are not really well-motivated [LAUGHS] by finding bugs in software!

HUIZINGA: Interesting.

LU: And then she said, you are really motivated, right, by, you know, helping developers find bugs in their software, so maybe that’s the research project for you. So that’s how I got started.

HUIZINGA: Well, let’s go a little bit further on this mentor and mentors in general. As Dr. Seuss might say, every “what” has a “who.” So by that I mean an inspirational person or people behind every successful researcher’s career. And most often, they’re kind of big names and meaningful relationships, but you have another unique story on who has influenced you in your career, so why don’t you tell us about the spectrum of people who’ve been influential in your life and your career?

LU: Mm-hmm. Yeah, I mean, I think I mentioned my adviser, and she’s just so supportive. And I remember, when I started doing research, I just felt like I seemed to be so far behind everyone else. You know, I felt like, how come everybody else knows how to ask, you know, insightful questions? And, like, they know how to program really fast, bug free. And my adviser really encouraged me, saying, you know, there is background knowledge that you can pick up; you just need to be patient. But then there’s also, like, knowing how to do research, knowing how to think about things, problem solving. And she encouraged me, saying, Shan, you’re good at that!

HUIZINGA: Interesting!

LU: Well, I don’t know how she found out, and anyway, so she was super, super helpful.

HUIZINGA: OK, so go a little further on this because I know you have others that have influenced you, as well.

LU: Yes. Yes, yes. And, to be honest, I’m a very emotional, sensitive person. I would just, you know, move the timeline to, kind of, more recent times. So I joined Microsoft Research as a manager, and there’s something called Connect that, you know, people write twice every year talking about what they’ve been doing. So I was just reading through the ones from the members of my team to get myself familiar with them. And I remember I read several of them, and I felt like I almost had tears in my eyes! Just to give an example, for Chris, Chris Hawblitzel, I read his Connect, and I saw that he’s working on something called program verification. It’s a very, very difficult problem, and [as an] outsider, you know, I’ve read many of his papers, but when I read, you know, his own writing, I realized, wow, it’s almost two decades, right. Like, he just keeps doing these very difficult things. And I read his words about, you know, how his old approach had problems, how he was thinking about how to address that problem. Oh, I have an idea, right. And then spend multiple years to implement that idea and get improvement; find a new problem and then find new solutions. And I really feel like, wow, there’s, how to say, a hero-ish story behind this kind of goal, being willing to spend many years to keep tackling this challenging problem. And I just feel like, wow, I’m so honored, you know, to be in the same group with a group of fighters, you know, determined to tackle difficult research problems.

HUIZINGA: Yeah. And I think when you talk about it, it’s like this is a person that was working for you, a direct report. [LAUGHTER] And often, we think about our heroes as being the ones who mentored us, who taught us, who managed us, but yours is kind of 360! It’s like …

LU: True!

HUIZINGA: … your heroes [are] above, beside and below.

LU: Right. And I would just say that I have many other, you know, direct reports in my group, and, for example, a couple of my other colleagues, my direct reports, Dan Ports and Jacob Nelson. Again, their story really inspired me. They, again, spent five or six years on something, and it looked like, oh, it was close to the success of tech transfer, and then something out of their control happened. It happened because Intel decided to stop manufacturing a chip that their research relied on. And it was, kind of, like the end of the world to them, …

HUIZINGA: Yeah.

LU: … and then they did not give up. And then, you know, like, one year later, they found a solution, you know, together with their product team collaborators.

HUIZINGA: Wow.

LU: And I still feel like, wow, you know, I feel so … I feel like I’m inspired every day! Like, I’m so happy to be working together with, you know, all these great people, great researchers in my team.

HUIZINGA: Yeah. Wow. So much of your work centers on this idea of concurrent systems and I want you to talk about some specific examples of this work next, but I think it warrants a little explication upfront for those people in the audience who don’t spend all their time working on concurrent systems themselves. So give us a short “101” on concurrent systems and explain why the work you do matters to both the people who make it and the people who use it.

LU: Sure. Yeah. So I think a lot of people may not realize … actually, the software we’re using every day, almost every software we use these days, is concurrent. The meaning of concurrent is that you have multiple threads of execution going on at the same time, in parallel. When we go to a web browser, right, it’s not just one rendering that is going on. There are actually multiple concurrent renderings going on. So for software developers who develop this type of concurrent system, a challenge is the timing. Because you have multiple concurrent things going on, it’s very difficult to manage and reason about, you know, what may happen first, what may happen second. And also, there’s an inherent non-determinism in it. What happened first this time may happen second next time. So as a result, a lot of bugs are introduced by this. And it was a very challenging problem because, I would say, about 20 years ago, there was a shift. Like, in the older days, actually most of our software was written in a sequential way instead of a concurrent way. So, you know, a lot of developers also had a difficult time shifting their mindset from the sequential way of reasoning to this concurrent way of reasoning.
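To make the timing problem concrete, here is a minimal sketch, ours rather than anything from the episode, of the kind of atomicity bug Lu describes, written in Rust. Two threads each check a shared balance and then withdraw from it, but the check and the update happen under separate lock acquisitions, so the threads can interleave between them:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // A shared account balance protected by a lock.
    let balance = Arc::new(Mutex::new(100));

    let handles: Vec<_> = (0..2)
        .map(|_| {
            let balance = Arc::clone(&balance);
            thread::spawn(move || {
                // BUG: the check and the update are two separate
                // critical sections, so another thread can withdraw
                // in between (an atomicity violation).
                let enough = *balance.lock().unwrap() >= 100; // check
                if enough {
                    *balance.lock().unwrap() -= 100; // act
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // Depending on thread timing, this can print 0 or -100.
    println!("final balance: {}", *balance.lock().unwrap());
}
```

Rust’s ownership rules prevent data races on `balance` itself, but they cannot prevent this kind of timing bug; catching it takes the sort of analysis tools Lu builds.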

HUIZINGA: Right. Well, and I think, from a user’s perspective, all you experience is what I like to call the spinning beachball of doom. It’s like I’ve asked something, and it doesn’t want to give, so [LAUGHS] … And this is, like, behind the scenes from a reasoning perspective of, how do we keep that from happening to our users? How do we identify the bugs? Which we’ll get to in a second. Umm. Thanks for that. Your research now revolves around what I would call the big idea of learning from mistakes. And in fact, it all seems to have started with a paper that you published way back in 2008 called “Learning from Mistakes: A Comprehensive Study on Real World Concurrency Bug Characteristics,” and you say this strongly influenced your research style and approach. And by the way, I’ll note that this paper received the Most Influential Paper Award in 2022 from ASPLOS, which is the Architectural Support for Programming Languages and Operating Systems. Huge mouthful. And it also has more than a thousand citations, so I dare say it’s influenced other researchers’ approach to research, as well. Talk about the big idea behind this paper and exactly how it informed your research style and approach today.

LU: Mm-hmm. Yeah. So I think this, like, again, went back to the days that I, you know, my PhD days, I started working with my adviser, you know, YY (Yuanyuan Zhou). So at that time, there had been a lot of people working on bug finding, but then now when I think about it, people just magically say, hey, I want to look at this type of bug. Just magically, oh, I want to look at that type of bug. And then, my adviser at that time suggested to me, saying, hey, maybe, you know, actually take a look, right. At that time, as I mentioned, software was kind of shifting from sequential software to concurrent software, and my adviser was saying, hey, just take a look at those real systems bug databases, and see what type of concurrency bugs are actually there. You know, instead of just randomly saying, oh, I want to work on this type of bug.

HUIZINGA: Oh, yeah.

LU: And then also, of course, it’s not just look at it. It’s not just like you read a novel or something, right. [LAUGHTER] And again, my adviser said, hey, Shan, right, you have this, you have a connection, natural connection, you know, with bugs and the developers who commit …

HUIZINGA: Who make them …

LU: Who make them! [LAUGHTER] So she said, you know, try to think about the patterns behind them, right. Try to think about whether you can generalize some …

HUIZINGA: Interesting …

LU: … characteristics, and use that to guide people’s research in this domain. And at that time, we were actually thinking we didn’t know whether, you know, we could actually write a paper about it because traditionally you publish a paper just to say, oh, I have a new tool, right, which can do this and that. At that time in system conferences, people rarely published something that just says, here’s a study, right. But we did that study, and indeed, you know, I had this thought about, hey, why I make a lot of mistakes. And as I studied a lot of bugs, more and more, I felt, you know, there’s a reason behind them, right. I’m not the only dumb person in the world, right? [LAUGHTER] There’s a reason: some part of this language is difficult to use, right, and there’s a certain type of concurrent reasoning that’s just not natural to many people, right. So because of that, there are patterns behind these bugs. And so at that time, we were surprised that the paper was actually accepted. Because I was just happy with the learning I got. But after this paper was accepted, in the next, I would say, many years, more and more people realized, hey, before we actually, you know, do bug-finding things, let’s first do a study, right, to understand. And then this paper was … yeah … I was very happy that it was cited many, many times.

HUIZINGA: Yeah. And then gets the most influential paper many years later.

LU: Many years later. Yes.

HUIZINGA: Yeah, I feel like there are a lot of things going through my head right now, one of which is that AI is a pattern detector, and you were doing that before AI even came on the scene. Which goes to show you that humans are pretty good at pattern detection also. We might not do it as fast …

LU: True.

HUIZINGA: … as an AI but … so this idea of learning from mistakes is a broad theme. Another theme that I see coming through your papers and your work is persistence. [LAUGHTER] And you mentioned this about your team, right. I was like, these people are people who don’t give up. So we covered this idea in an Abstracts podcast recently talking about a paper which really brings this to light: “If at First You Don’t Succeed, Try, Try Again.” That’s the name of the paper. And we didn’t have time to discuss it in depth at the time because the Abstracts show is so quick. But we do now. So I’d like you to expand a little bit on this big idea of persistence and how large language models are not only changing the way programming and verification happens but also providing insights into detecting retry bugs.

LU: Yes. So I guess maybe I will, since you mentioned this persistence, you know, after that “Learning from Mistakes” paper—so that was in 2008—and in the next 10 years, a little bit more than 10 years, in terms of persistence, right, so we have continued, me and my students, my collaborators, we have continued working on, you know, finding concurrency bugs …

HUIZINGA: Yeah.

LU: … which is related, kind of related, to why I’m here at Microsoft Research. And we kept doing it, doing it, and then I feel like a high point was that I had a collaboration with my now colleagues here, Madan Musuvathi and Suman Nath. So we built a tool to detect concurrency bugs, and after more than 15 years of effort on this, we were able to find more than 1,000 concurrency bugs. It was built into a tool called Torch that was deployed in the company, and it won the Best Paper Award at the top system conference, SOSP, and it was actually a bittersweet moment. This paper seemed to, you know, put an end …

HUIZINGA: Oh, interesting!

LU: … to our research. And also, one of the findings from that paper is that we used to do very sophisticated program analysis to reason about the timing. And in that paper, we realized, actually, sometimes if you’re a little bit fuzzy, if you don’t aim to do perfect analysis, the resulting tool is actually more effective. So after that paper, Madan, Suman, and I, we, kind of, you know, shifted our focus to looking at other types of bugs. And at the same time, the three of us realized that traditional, very precise program analysis may not be needed for some bug finding. So then, for this paper on retry bugs, after we shifted our focus away from concurrency bugs, we realized, oh, there are many other types of important bugs, such as, in this case, retry, right, for when your software goes wrong. Another thing we learned is that it looks like you can never eliminate all bugs, so something will go wrong, [LAUGHTER] and that’s why you need something like retry, right. So if something goes wrong, at least the software won’t give up immediately.

HUIZINGA: Right.

LU: The software will retry. And another thing that started from this earlier effort is that we started using large language models, because we realized, yeah, you know, traditional program analysis sometimes can give you a very strong guarantee, but in some other cases, like in this retry case, a somewhat fuzzy analysis, you know, not so precise, offered by large language models is sometimes even more beneficial. Yeah. So that’s, kind of, you know, the story behind this paper.
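As an illustration only, here is the general shape of one common retry bug, in a hypothetical Rust sketch of ours rather than code from the paper: a loop that retries every failure, with no limit, no backoff, and no distinction between transient and permanent errors, followed by a more careful variant.

```rust
use std::time::Duration;

// Stand-in for a real network call; always fails here for illustration.
fn fetch(_url: &str) -> Result<String, String> {
    Err("transient failure".to_string())
}

// BUG: retries every error forever. A permanent error (say, a bad URL)
// spins for eternity; a transient one hammers the remote service.
#[allow(dead_code)]
fn fetch_with_retry(url: &str) -> Result<String, String> {
    loop {
        match fetch(url) {
            Ok(body) => return Ok(body),
            Err(_) => std::thread::sleep(Duration::from_millis(10)),
        }
    }
}

// A safer shape: bounded attempts with exponential backoff.
fn fetch_with_bounded_retry(url: &str) -> Result<String, String> {
    let mut delay = Duration::from_millis(10);
    for _attempt in 0..5 {
        match fetch(url) {
            Ok(body) => return Ok(body),
            Err(_) => {
                std::thread::sleep(delay);
                delay *= 2; // back off before the next attempt
            }
        }
    }
    Err(format!("giving up on {url} after 5 attempts"))
}

fn main() {
    match fetch_with_bounded_retry("https://example.com/data") {
        Ok(body) => println!("got {} bytes", body.len()),
        Err(e) => eprintln!("{e}"),
    }
}
```

Whether a given retry loop is actually buggy depends on context the code alone doesn’t show, such as whether the operation being retried is idempotent, which is part of why the paper pairs traditional program analysis with an LLM’s fuzzier judgment.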

HUIZINGA: Yeah, yeah, yeah, yeah. So, Shan, we’re hearing a lot about how large language models are writing code nowadays. In fact, NVIDIA’s CEO says, mamas, don’t let your babies grow up to be coders because AI’s going to do that. I don’t know if he’s right, but one of the projects you’re most excited about right now is called Verus, and your colleague Jay Lorch recently said that he sees a lot of synergy between AI and verification, where each discipline brings something to the other, and Rafah Hosn has referred to this as “co-innovation” or “bidirectional enrichment.” I don’t know if that’s exactly what is going on here, but it seems like it is. Tell us more about this project, Verus, and how AI and software verification are helping each other out.

LU: Yes, yes, yes, yes. I’m very excited about this project now! So first of all, starting from Verus. So Verus is a tool that helps you verify the correctness of Rust code. So this is a … it’s a relatively new tool, but it’s creating a lot of, you know, excitement in the research community, and it’s created by my colleague Chris Hawblitzel and his collaborators outside Microsoft Research.

HUIZINGA: Interesting.

LU: And as I mentioned, right, this is a part that, you know, really inspired me. So traditionally, to verify, right, that your program is correct, it requires a lot of expertise. You actually have to write your proof, typically in a special language. And, you know, a lot of people, including me, right, are so eager to get rid of bugs in their software, but there were people who told me, just to learn that language—they were referring to a language called Coq—just to learn that language, they said, takes one or two years. And then once you learn that language, right, then you have to learn how to write proofs in that special language. So people, particularly in the bug-finding community, know that, oh, in theory, you can verify it, but in reality, people don’t do that. OK, so now going back to this Verus tool, why it’s exciting … it actually allows people to write proofs in Rust. So Rust is an increasingly popular language, and there are more and more people picking up Rust. It was the first time I heard about, oh, you can, you know, write proofs in a popular language. And also, another thing is, in the past, you could not verify an implementation directly. You could only verify something written in a special language. And the proof is proving something that is in a special language. And then finally, that special language is maybe then transformed into an implementation. There are just too many special languages there.

HUIZINGA: A lot of layers.

LU: A lot of layers. So now this Verus tool allows you to write a proof in Rust to prove an implementation that is in Rust. So it’s very direct. I just feel like I’m just not good at learning a new language.
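For a flavor of what this looks like, here is a minimal sketch modeled on the introductory examples in the Verus documentation; it is ours, not from the episode. Both the specification and the implementation are ordinary-looking Rust inside the `verus!` macro:

```rust
use vstd::prelude::*;

verus! {

// The `ensures` clauses are the specification. Verus proves at
// compile time that every execution of the body satisfies them,
// so no test suite is needed for these properties.
fn max(a: u64, b: u64) -> (result: u64)
    ensures
        result >= a,
        result >= b,
        result == a || result == b,
{
    if a >= b { a } else { b }
}

fn main() {}

} // verus!
```

For a function this small, the proof goes through automatically; the effort Lu mentions comes in when larger systems need loop invariants and lemmas spelled out by hand.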

HUIZINGA: Interesting.

LU: So when I came here, you know, and learned about this Verus tool, you know, by Chris and his collaborators, I feel like, oh, looks like maybe I can give it a try. And surprisingly, I realized, oh, wow! I can actually write proofs using this Verus tool.

HUIZINGA: Right.

LU: And then, of course, you know, I was told, if you really want to, right, write proofs for large systems, it still takes a lot of effort. And then this idea came to me that, hey, maybe, you know, these days, like, large language models can write code, so why not let large language models write proofs, right? And of course, you know, other people actually had this idea, as well, but there’s a doubt that, you know, can large language models really write proofs, right? And also, people have this feeling that, you know, large language models seem not very disciplined, you know, by nature. But, you know, that’s what intrigued me, right. And also, I used to be a doubter of, say, GitHub Copilot. USED to! Because I feel like, yes, it can generate a lot of code, but who knows [LAUGHS] …

HUIZINGA: Whether it’s right …

LU: What, what is … whether it’s right?

HUIZINGA: Yeah.

LU: Right, so I feel like, wow, you know, this could be a game-changer, right? Like, if AI can write not only code but also proofs. Yeah, so that’s what I have been doing. I’ve been working on this for one year, and I’ve gradually gotten more collaborators, both, you know, people in Microsoft Research Asia and, you know, experts here, like Chris and Jay Lorch. They all help me a lot. So we actually have made a lot of progress.

HUIZINGA: Yeah.

LU: Like, we’ve tried it, for example, on some small programs, benchmarks, and we see that large language models can actually correctly prove the majority of the benchmarks that we throw at them. Yeah. It’s very, very exciting.

HUIZINGA: Well, and so … and we’re going to talk a little bit more about some of those doubts and some of those interesting concerns in a bit. I do want you to address what I think Jay was getting at, which is that somehow the two help each other. The verification improves the AI. The AI improves the verification.

LU: Yes, yes.

HUIZINGA: How?

LU: Yes. My feeling is that a lot of people, if they’re concerned about using AI, it’s because they feel like there’s no guarantee for the content generated by AI, right. And then we’ve also all heard about, you know, hallucination. And I tried it myself. Like, I remember, at some point, I asked AI, say, you know, which is bigger: three times three or eight? And the AI told me eight is bigger. And … [LAUGHTER]

HUIZINGA: Like, what?

LU: So I feel like verification can really help AI …

HUIZINGA: Get better …

LU: … because now you can, kind of, add mathematical rigor into whatever is generated by AI, right. And as I said, it will help AI. It will also help people who use AI, right, so that they know what can be trusted, right.

HUIZINGA: Right.

LU: What is guaranteed by this content generated by AI?

HUIZINGA: Yeah, yeah, yeah.

LU: Yeah, and now, of course, AI can help verification because, you know, verification is hard. There is a lot of mathematical reasoning behind it. [LAUGHS] And so now, with AI, it will enable verification to be picked up by more and more developers so that we can get higher-quality software.

HUIZINGA: Yeah.

LU: Yeah.

HUIZINGA: Yeah. And we’ll get to that, too, about what I would call the democratization of things. But before that, I want to, again, share an observation that I had based on your work and my conversations with you, which is that you’ve basically dedicated your career to hunting bugs.

LU: Yes.

HUIZINGA: And maybe that’s partly due to a personal story about how a tiny mistake became a bug that haunted you for years. Tell us the story.

LU: Yes.

HUIZINGA: And explain why and how it launched a lifelong quest to understand, detect, and expose bugs of all kinds.

LU: Yes. So before I came here, I had already, you know, interacted with Microsoft Research multiple times. I was a summer intern at Microsoft Research Redmond almost 20 years ago.

HUIZINGA: Oh, wow!

LU: I think it was in the summer of 2005. And I remember I came here, you know, full of ambition. I thought, OK, you know, I will implement some smart algorithm. I will deliver some useful tools. At that time, I had just finished two years of my PhD, so I had, kind of, just started my research on bug finding and so on. And I remember I came here, and I was told that I needed to program in C#. And, you know, I just naturally have a fear of learning a new language. But anyway, I thought, oh, the task I was assigned was very straightforward. And I think I got ahead of myself. I was thinking, oh, I want to quickly finish this, and I want to do something more novel, you know, that can be more creative. But then this simple task I was assigned, I ended up spending the whole summer on it. So the tool that I wrote was supposed to process very huge logs. And the problem was, I could only run my software for 10 minutes because it used so much memory that it would crash. And then, I spent a lot of time … I was thinking, oh, my software is just using too much memory. Let me optimize it, right. And so, you know, I tried to make sure to use memory in a very efficient way, but then as a result, instead of crashing every 10 minutes, it would just crash after one hour. And I knew there was a bug at that time. There’s a type of bug called a memory leak. I knew there was a bug in my code, and I spent a lot of time, and there was an engineer helping me check my code. We spent a lot of time. We were just not able to find that bug. And in the end, the solution was that I just sat in front of my computer waiting for my program to crash and restarting it. [LAUGHTER] And at that time, because there were very few remote working options, in order to finish processing all those logs, it’s like, you know, after dinner, I …

HUIZINGA: You have to stay all night!

LU: I had to stay all night! And all my intern friends, they were saying, oh, Shan, you work really hard! And I was just feeling like, you know, what I’m doing is just sitting in front of my computer waiting [LAUGHTER] for my program to crash so that I can restart it! And near the end of my internship, I finally found the bug. It turns out that I had missed a pair of brackets in one line of code.

HUIZINGA: That’s it.

LU: That’s it.

HUIZINGA: Oh, my goodness.

LU: And it turns out, because I was used to C, and in C, when you want to free, which means deallocate, an array, you just say “free array.” And if I remember correctly, in this language, C#, you have to say, “free this array name” and you put a bracket behind it. Otherwise, it will only free the first element. And I … it was a nightmare. And I also felt like, the most frustrating thing is, if it’s a clever bug, right … [LAUGHS]

HUIZINGA: Sure.

LU: … then you feel like at least I’m defeated by something complicated …

HUIZINGA: Smart.

LU: Something smart. And then it’s like, you know, also all this ambition I had about, you know, doing creative work, right, with all these smart researchers in MSR (Microsoft Research), I feel like I ended up achieving very little in my summer internship.

HUIZINGA: But maybe the humility of making a stupid mistake is the kind of thing that somebody who’s good at hunting bugs … It’s like missing an error in the headline of an article because the print is so big [LAUGHTER] that you’re looking for the little things in the … I know that’s a journalist’s problem. Actually, I love that story. And it, kind of, presents a big picture of you, Shan, as a person who has a realistic self-awareness … and humility, which I think is rare at times in the software world. So thanks for sharing that. So moving on. When we talked before, you mentioned the large variety of programming languages and how that can be a barrier to entry or at least a big hurdle to overcome in software programming and verification. But you also talked about, as we just mentioned, how LLMs have been a democratizing force …

LU: Yes.

HUIZINGA: in this field. So going back to when you first started …

LU: Yes.

HUIZINGA: … and what you see now with the advent of tools like GitHub Copilot, …

LU: Yes.

HUIZINGA: … what … what’s changed?

LU: Oh, so much has changed. Well, I don’t even know where to start. Like, I used to be really scared of programming. You know, when I tell this story, a lot of people say, no, I don’t believe you. And I feel like it’s a trauma, you know.

HUIZINGA: Sure.

LU: I almost feel like, you know, the college-day me, right, who was scared of starting any programming project. Somehow, I felt humiliated asking those very, I feel like, stupid questions to my classmates. It almost changed my personality! For a long time, whenever someone introduced me to a new software tool, my first reaction was, uh, I probably will not be able to even install it successfully. Whenever, you know, there was a new language, my first reaction was, uh, no, I’m not good at it. And then, like, for example, this GitHub Copilot thing, actually, I did not try it until I joined Microsoft. And I hadn’t programmed for a long time. And then I started collaborating with someone in Microsoft Research Asia, and he writes programs in Python, right. And I had never written a single line of Python code before. And also, this Verus tool. It helps you verify code in Rust, but I had never learned Rust before. So I thought, OK, maybe let me just try GitHub Copilot. And wow! You know, it’s like I realized, wow! Like … [LAUGHS]

HUIZINGA: I can do this!

LU: I can do this! And, of course, sometimes I feel like my colleagues may be surprised because, on one hand, it looks like I’m able to just, you know, finish writing a Rust function. But on some other days, I ask very basic questions, [LAUGHTER] and I have those questions because, you know, GitHub Copilot just helps me finish! [LAUGHS]

HUIZINGA: Right.

LU: You know, I just start something, and then it just helps me finish. And I wish, when I started college, there had been GitHub Copilot at that time. I feel like, you know, my mindset towards programming and towards computer science might have been different. So it does make me feel very positive, you know, about, you know, what future we have, you know, with AI, with computer science.

HUIZINGA: OK, usually, I ask researchers at this time, what could possibly go wrong if you got everything right? And I was thinking about this question in a different way until just this minute. I want to ask you … what do you think that it means to have a tool that can do things for you that you don’t have to struggle with? And maybe, is there anything good about the struggle? Because you’re framing it as it sapped your confidence.

LU: [LAUGHS] Yes.

HUIZINGA: And at the same time, I see a woman who emerged stronger because of this struggle with an amazing career, a huge list of publications, influential papers, citations, leadership role. [LAUGHTER] So in light of that …

LU: Right.

HUIZINGA: … what do you see as the tension between struggling to learn a new language versus having this tool that can just do it that makes you look amazing? And maybe the truth of it is you don’t know!

LU: Yeah. That’s a very good point. I guess you need some kind of balance. And on one hand, yes, I feel like, again, right, this goes back to like my internship. I left with the frustration that I felt like I have so much creativity to contribute, and yet I could not because of this language barrier. You know, I feel positive in the sense that just from GitHub Copilot, right, how it has enabled me to just bravely try something new. I feel like this goes beyond just computer science, right. I can imagine it’ll help people to truly unleash their creativity, not being bothered by some challenges in learning the tool. But on the other hand, you made a very good point. My adviser told me she feels like, you know, I write code slowly, but I tend to make fewer mistakes. And the difficulty of learning, right, and all these nightmares I had definitely made me more … more cautious? I pay more respect to the task that is given to me, so there is definitely the other side of AI, right, which is, you feel like everything is easy and maybe you do not have the experience of those bugs, right, that a software can bring to you and you have overreliance, right, on this tool.

HUIZINGA: Yeah!

LU: So hopefully, you know, some of the things we’re doing now, right, like, for example, verification, right, bringing this mathematical rigor to AI, hopefully that can help.

HUIZINGA: Yeah. You know, even as you unpack the nuances there, it strikes me that both are good. Both having to struggle and learning languages and understanding …

LU: Yeah.

HUIZINGA: … the core of it and the idea that in natural language, you could just say, here’s what I want to happen, and the AI does the code, the verification, etc. That said, do we trust it? And this was where I was going with the first “what could possibly go wrong?” question. How do we know that it is really as clever as it appears to be? [LAUGHS]

LU: Yeah, I think I would just use the research problem we are working on now, right. Like, on one hand, I can use AI to generate a proof, right, to prove the code generated by AI is correct. But having said that, even if we’re wildly successful, you know, in this thing, human beings’ expertise is still needed. Just take this as an example: what do you mean by “correct,” right?

HUIZINGA: Sure.

LU: And so someone first has to define what correctness means. And then so far, the experience shows that you can’t just define it using natural language because our natural language is inherently imprecise.

HUIZINGA: Sure.

LU: So you still need to translate it to a formal specification in a programming language. It could be in a popular language like Rust, right, which is what Verus is aiming at. And, for example, some of the research we do is showing that, yes, you know, I can also use AI to do this translation from natural language to specification. But again, then, who verifies that, right? So at the end of the day, I think we still do need to have humans in the loop. But what we can do is lower the burden and make the interface not so complicated, right, so that it’ll be easy for human beings to check what AI has been doing.
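A small example of that imprecision, again a sketch of ours in Verus-style Rust rather than anything from the episode: “return the absolute value of x” sounds complete in English, but writing it as a formal specification forces a hidden edge case into the open.

```rust
use vstd::prelude::*;

verus! {

// In English, "return the absolute value" hides an edge case: the
// absolute value of the smallest i32 (-2147483648, i.e., i32::MIN)
// does not fit in an i32. The formal spec must say so, here as a
// precondition that every caller has to prove it satisfies.
fn abs(x: i32) -> (r: i32)
    requires
        x != -2147483648,
    ensures
        r >= 0,
        r == x || r == -x,
{
    if x < 0 { -x } else { x }
}

fn main() {}

} // verus!
```

Checking that a specification like this actually captures the original intent is exactly the step Lu argues still needs a human in the loop.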

HUIZINGA: Yeah. You know, everything we’re talking about just reinforces this idea that we’re living in a time where the advances in computer science that seemed unrealistic or impossible, unattainable even a few years ago, are now so common that we take them for granted. And they don’t even seem outrageous, but they are. So I’m interested to know what, if anything, you would classify now as “blue sky” research in your field. Maybe something in systems research today that looks like a moonshot. You’ve actually anchored this in the fact that you, kind of, have, you know, blinders on for the work you’re doing—head down in the work you’re doing—but even as you peek up from that work, is there anything else that might be outrageous? I just like to get this out there: you know, what’s going on 10 years down the line?

LU: You know, sometimes I feel like I’m just so much into my own work now, but, you know, occasionally, like, say, when I had a chat with my daughter and I explained to her, you know, oh, I’m working on, you know, not only having AI generate code but also having AI prove, right, that the code is correct. And she would feel, wow, that sounds amazing! [LAUGHS] So I don’t know whether that is, you know, a moonshot thing, but that’s a thing that I’m super excited about …

HUIZINGA: Yeah.

LU: … about the potential. And then there are also, you know, my colleagues … we spend a lot of time building systems, and it’s not just about correctness, right. Like, the verification thing I’m doing now is related to automatically verifying it’s correct. But also, you need to do a lot of performance tuning, right, just so that your system can react fast, right, so it can have good utilization of computer resources. And my colleagues are also working on using AI, right, to automatically do performance tuning. And I know what they are doing, so I don’t particularly feel that’s a moonshot, but I guess …

HUIZINGA: I feel like, because you are so immersed, [LAUGHTER] you just don’t see how much we think …

LU: Yeah!

HUIZINGA: … it’s amazing. Well, I’m just delighted to talk to you today, Shan. As we close … and you’ve sort of just done a little vision casting, but let’s take your daughter, my daughter, [LAUGHTER] all of our daughters …

LU: Yes!

HUIZINGA: How does what we believe about the future in terms of these things that we could accomplish influence the work we do today as sort of a vision casting for the next “Shan Lu” who’s struggling in undergrad/grad school?

LU: Yes, yes, yes. Oh, thank you for asking that question. Yeah, I have to say, you know, I think we’re in a very interesting time, right, with all this AI thing.

HUIZINGA: Isn’t that a curse in China? “May you live in interesting times!”

LU: And I think there were times, actually, you know, before I myself fully embraced AI, when I … indeed I had my daughter in mind. I was worried about what would happen when she grows up. Would there be no job for her because everything will be done by AI?

HUIZINGA: Oh, interesting.

LU: But then now, now that I have, you know, kind of, fully embraced AI myself, actually, I see this more and more positively. Like you said, I remember, you know, those older days myself, right. That is really, like, I have this struggle that I feel like I can do better. I feel like I have ideas to contribute, but just for whatever reason, right, it took me forever to learn something which I feel like is a very mechanical thing, but it just takes me forever to learn, right. And then now actually, I see this hope, right, with AI, you know, a lot of mechanical things can actually now be done in a much more automated way by AI, right. So then now truly, you know, my daughter, many girls, many kids out there, right, whatever, you know, they are good at, their creativity, it’ll be much easier, right, for them to contribute their creativity to whatever discipline they are passionate about. Hopefully, they don’t have to, you know, go through what I went through, right, to finally be able to contribute. But then, of course, you know, at the same time, I do feel this responsibility: me, my colleagues, MSR, we have the capability and also the responsibility, right, of building AI tools in a responsible way so that they will be used in a positive way by the next generation.

HUIZINGA: Yeah. Shan Lu, thank you so much for coming on the show today. [MUSIC] It’s been absolutely delightful, instructive, informative, wonderful.

LU: Thank you. My pleasure.
