Facebook is a place where bright minds in computer science come to work on some of the world’s most complex and challenging research problems. In addition to recruiting top talent, we maintain close ties with academia and the research community to collaborate on difficult challenges and find solutions together. In this new monthly interview series, we turn the spotlight on members of the academic community and the important research they do — as partners, collaborators, consultants, or independent contributors.
This month, we reached out to Ayesha Ali, professor at Lahore University of Management Sciences (LUMS) in Pakistan. Ali is a two-time winner of the Facebook Foundational Integrity Research request for proposals (RFP) in misinformation and polarization (2019 and 2020). In this Q&A, Ali shares the results of her research, its impact, and advice for university faculty looking to follow a similar path.
Q: Tell us about your role at LUMS and the type of research you and your department specialize in.
Ayesha Ali: I joined the Department of Economics at LUMS in 2016 as an assistant professor, after completing my PhD in economics at the University of Toronto. I am trained as an applied development economist, and my research focuses on understanding and addressing policy challenges facing developing countries, such as increasing human development, managing energy and the environment, and leveraging technology for societal benefit. Among the themes I am working on is how individuals with low levels of digital literacy perceive and react to content on social media, and how that affects their beliefs and behavior.
Q: How did you decide to pursue research projects in misinformation?
AA: Before writing the first proposal back in 2018, I had been thinking about the phenomenon of misinformation and fabricated content for quite some time. On multiple occasions, I had the opportunity to interact with colleagues in the computer science department on this issue, and we had some great discussions about it.
We quickly realized that we cannot combat misinformation with technology alone. It is a multifaceted issue. To address this problem, we need the following: user education, technology for filtering false news, and context-specific policies for deterring false news generation and dissemination. We were particularly interested in thinking about the different ways we could educate people who have low levels of digital literacy to recognize misinformation.
Q: What were the results of your first research project, and what are your plans for the second one?
AA: In our first project, we used a randomized field experiment to study the effect of two types of user education programs on people’s ability to recognize false news. Using a list of actual news stories circulated on social media, we created a test to measure the extent to which people are likely to believe misinformation. Contrary to their perceived effectiveness, general video-based educational messages about misinformation had no significant effect.
However, when video-based educational messages were augmented with personalized feedback based on individuals’ past engagement with false news, there was a significant improvement in their ability to recognize false news. Our results show that, when appropriately designed, educational programs can be effective in making people more discerning consumers of information on social media.
Our second project aims to build on this research agenda. We plan to focus on nontextual misinformation, such as audio deepfakes. Audio messages are a popular form of communication among people with low levels of literacy and digital literacy. Using surveys and experiments, we will examine how people perceive, consume, and engage with information received via audio deepfakes, and what role prior beliefs and analytical ability play in forming perceptions about the accuracy of such information. We also plan to design and experimentally evaluate an educational intervention to increase people’s ability to identify audio deepfakes.
Q: What is the impact of your research in your region and globally?
AA: I think there are at least three ways in which our work is having an impact:
- Our work raises awareness about the importance of digital literacy campaigns in combating misinformation. It shows that such interventions hold promise in making users more discerning consumers of information if they are tailored to the target population (e.g., populations with low digital literacy).
- Our work can inform policy on media literacy campaigns and how to structure them, especially for populations with low digital literacy. We are already in touch with various organizations in Pakistan to see how our findings can be put to use in digital literacy campaigns. For example, COVID-19 vaccines are likely to become available in the coming months, and there is a need to raise awareness about their importance and to proactively dispel any conspiracy theories and misinformation about them. Past experience with polio vaccination campaigns has shown that conspiracy theories can take strong root and even endanger human lives.
- We hope that our work will motivate others to take on such global societal challenges, especially in developing countries.
Q: What advice would you give to academics looking to get their research funded?
AA: I think there are three ingredients in a good research proposal:
- It tackles an important problem that ideally has contextual/local relevance.
- It proposes a well-motivated solution or plan of action with contextual/local relevance.
- It shows or at least makes the case for why you are uniquely placed to solve it well.
Q: Where can people learn more about your research?
AA: They can learn about my research on my webpage.