AI and Persuasion

Bai, H., Voelkel, J. G., Muldowney, S., Eichstaedt, J. C., & Willer, R.
Nature Communications
The emergence of large language models (LLMs) has made it possible for generative artificial intelligence (AI) to tackle many higher-order cognitive tasks, with critical implications for industry, government, and labor markets. Here, we investigate whether existing, openly available LLMs can be used to create messages capable of influencing humans’ political attitudes. Across three pre-registered experiments (total N = 4,829), participants who read persuasive messages generated by LLMs showed significantly more attitude change across a range of policies—including polarized policies, like an assault weapons ban, a carbon tax, and a paid parental-leave program—relative to control-condition participants who read a neutral message. Overall, LLM-generated messages were as effective in influencing policy attitudes as messages crafted by lay humans. Participants’ reported perceptions of the authors of the persuasive messages suggest these effects occurred through somewhat distinct causal pathways. While the persuasiveness of LLM-generated messages was associated with perceptions that the author used more facts, evidence, logical reasoning, and a dispassionate voice, the persuasiveness of human-generated messages was associated with perceptions of the author as unique and original. These results demonstrate that recent developments in AI make it possible to create politically persuasive messages quickly, cheaply, and at massive scale.

Costello, T.H., Pennycook, G., Willer, R., & Rand, D.
Working Paper
“Deep canvassing”—extended, emotionally resonant conversations encouraging perspective-taking—durably reduces exclusionary attitudes but is resource-intensive. We test whether deep canvassing can be successfully performed by AI. We randomly assigned N = 1,111 U.S. adults to an AI-led deep canvassing conversation about immigration (treatment) or about a neutral topic (control). Immediately after the conversation, the treatment significantly reduced anti-immigrant prejudice (d = −0.13) and increased pro-immigration policy support (d = 0.15) relative to the control. Five weeks later, in a survey fielded amidst the highly polarizing 2024 U.S. election, these effects endured at 30–40% of their initial size. Thus, we find that AI can implement deep canvassing, producing durable and meaningful prejudice reduction with the potential to change minds at scale.

Gallegos, I. O., Shani, C., Shi, W., Bianchi, F., Gainsburg, I., Jurafsky, D., & Willer, R.
Working Paper
As generative artificial intelligence (AI) enables the creation and dissemination of information at massive scale and speed, it is increasingly important to understand how people perceive AI-generated content. One prominent policy proposal requires explicitly labeling AI-generated content to increase transparency and encourage critical thinking about the information, but prior research has not yet tested the effects of such labels. To address this gap, we conducted a survey experiment (N = 1,601) on a diverse sample of Americans, presenting participants with AI-generated messages about several public policies (e.g., allowing colleges to pay student-athletes) and randomly assigning participants to be told a message was generated by (a) an expert AI model or (b) a human policy expert, or (c) to see no authorship label. We found that messages were generally persuasive, influencing participants’ views of the policies by 9.74 percentage points on average. However, while 94.6% of participants assigned to the AI and human label conditions believed the authorship labels, the labels had no significant effects on participants’ attitude change toward the policies, judgments of message accuracy, or intentions to share the message with others. These patterns were robust across a variety of participant characteristics, including prior knowledge of the policy, prior experience with AI, political party, education level, and age. Taken together, these results imply that, while authorship labels would likely enhance transparency, they are unlikely to substantially affect the persuasiveness of the labeled content, highlighting the need for alternative strategies to address challenges posed by AI-generated information.