Events

Current and Upcoming

Media Effects Empirical Workshop (MEEW)

May 3, 2024
10:00 AM - 6:00 PM
Lindsey Rogers Library Room, 7th Floor, International Affairs Building

Annual workshop featuring the latest research in political communication across subfields

Co-hosts: Eunji Kim and Tamar Mitts

Preliminary Schedule:

Breakfast and registration: 10:00am - 10:30am


Session I: 10:30am - 12:00pm

Bridging the Digital Divide: Data Access and Integration of Venezuelan Migrants in Colombia

Presenter: Nejla Asimovic [Penn/Georgetown McCourt] (co-authors: Kevin Munger, Mateo Vásquez Cortés)

Discussant: Tamar Mitts [Columbia]


Co-optation and Coercion of Online Influencers: Evidence from Saudi Wikipedia

Presenter: Alexandra Siegel [Colorado Boulder] 

Discussant: Xu Xu [Princeton]


Lunch: 12:00pm - 1:00pm


Session II: 1:00pm - 2:30pm

Harnessing Partisan Motives to Solve the Misinformation Problem

Presenter: Jennifer Allen [MIT Sloan/NYU Stern] (co-authors: Cameron Martel, David Rand)

Discussant: Andy Guess [Princeton]


Paying Attention

Presenter: Karthik Srinivasan [Chicago Booth]

Discussant: Suresh Naidu [Columbia Econ]


Session III: 2:45pm - 4:15pm

Copaganda: Entertainment Media Origins of Policing Attitudes

Presenter: Tyler Reny [Claremont] (co-authors: Eunji Kim, Esteban Manuel Fernandez)

Discussant: Patrick Egan [NYU]


Speaking with the State's Voice: The Decade-Long Growth of Government-Authored News Media in China under Xi Jinping

Presenter: Brandon M. Stewart [Princeton] (co-authors: Hannah Waight, Yin Yuan, Margaret E. Roberts)

Discussant: Dan Mattingly [Yale]


Happy Hour: 4:30pm - 6:00pm


Abstracts:

Bridging the Digital Divide: Data Access and Integration of Venezuelan Migrants in Colombia
The crisis in Venezuela has forced nearly two million people to seek refuge in Colombia, creating significant challenges for both the displaced individuals and the Colombian government. A notable hurdle is the limited internet access that impedes the acquisition of crucial information on government programs, economic opportunities, and social networks. In collaboration with Innovations for Poverty Action Colombia and the National Planning Department of Colombia, our study aims to assess the impact of enhanced data access on the lives of Venezuelan migrants in Colombia. Specifically, we seek to measure how improved data access influences their awareness of assistance programs, trust in the government, success in the job market, and overall well-being. To achieve this, we provide mobile data credits to a selected sample of Venezuelan migrants in Colombia who currently lack internet access. Within this sample, a subgroup receives WhatsApp messages directly from a moderator trained by Colombian government officials, with the delivery method varying – some participants receive messages within WhatsApp groups, fostering networking among themselves, while others receive messages directly from the moderator. These messages offer information about available social programs and actively encourage enrollment on an online portal. By analyzing the impact of this intervention through attitudinal and behavioral data, we aim to gain valuable insights that can inform policies to strengthen connections between migrants and host countries. Furthermore, we seek to leverage the widespread use of WhatsApp as a means to enhance public service delivery.


Co-optation and Coercion of Online Influencers: Evidence from Saudi Wikipedia
How do authoritarian regimes use co-optation and coercion of influential internet users to control online information? This paper explores how the Saudi regime co-opted prominent Wikipedia administrators to alter content on sensitive domestic and foreign political topics. I argue that the co-optation and coercion of influential social media users offers regimes an effective tool to manipulate online information environments, with greater plausible deniability and better evasion of content moderation than other forms of computational propaganda. Drawing on a recent ban of Saudi Wikipedia users for coordinated inauthentic activity, I use a two-way fixed effects design and quantitative text analysis of Wikipedia edits to evaluate how banned users’ behavior compares to the activity of non-banned users before and after their reported co-optation. I find that Saudi co-optation led to increased editing of pages referencing sensitive political topics, particularly during moments of crisis. This work contributes to our understanding of how authoritarian regimes have adapted longstanding strategies of co-optation, coercion, and information control in the digital age.


Harnessing Partisan Motives to Solve the Misinformation Problem
Partisan motives have been thought of as fundamentally in opposition to accuracy-directed motives in the context of misinformation. Politically motivated reasoning may even undermine “wisdom of crowds” approaches for identifying misinformation – an otherwise potentially effective and scalable strategy for reducing belief in, and the spread of, online falsehoods. However, this may not be the case – rather than being opposed, political and accuracy motivations may operate independently and, in some cases, work in tandem to benefit crowdsourced fact-checking systems. Here, we test such a framework. We first develop a simple formal model of misinformation flagging where accuracy and partisan motives independently promote reporting of inaccurate and counter-partisan content, respectively. This model predicts that partisan motivation can help drive overall flagging discernment by increasing reporting of news that is both false and politically discordant. To empirically assess this prediction, we carried out a survey study and analyzed field data from Twitter’s crowdsourced fact-checking platform Community Notes. These data show that more politically motivated individuals are more active community fact-checking participants, helping sustain overall contribution levels. Furthermore, our results align with our simple model predictions – politically motivated participants engage in more politically biased flagging yet exhibit the same or better flagging discernment. Our results challenge the notion that partisan motives inherently undermine truth-seeking behavior – instead, political motivation may help, rather than hinder, crowdsourced content review by providing a high quantity of high-quality fact-checks.


Paying Attention
Humans are social animals. Is the desire for attention from other people a quantitatively important non-monetary incentive? I consider this question in the context of social media, where platforms like Reddit and TikTok successfully attract a large volume of user-generated content without offering financial incentives to most users. Using data on two billion Reddit posts and a new sample of TikTok posts, I estimate the elasticity of content production with respect to attention, as measured by the number of likes and comments that a post receives. I isolate plausibly exogenous variation in attention by studying posts that go viral. After going viral, producers more than double their rate of content production for a month. I complement these reduced form estimates with a large-scale field experiment on Reddit. I randomly allocate attention by adding comments to posts. I use generative AI to produce responsive comments in real time, and distribute these comments via a network of bots. Adding comments increases production, though treatment efficacy depends on comment quality. Across empirical approaches, the attention labor supply curve is concave: producers value initial units of attention highly, but the marginal value of attention rapidly diminishes. Motivated by this fact, I propose a model of a social media platform which manages a two-sided market composed of content producers and consumers. The key trade-off is that consumers dislike low-quality content, but including low-quality content provides attention to producers, which boosts the supply of high-quality content in equilibrium. If the attention labor supply curve is sufficiently concave, then the platform includes some low-quality content, though a social planner would include even more.

Copaganda: Entertainment Media Origins of Policing Attitudes
Despite widespread evidence of police misconduct, most Americans continue to hold very favorable views toward law enforcement and the criminal justice system. Why? One potential explanation is the widespread consumption of police procedural television programs that tend to feature one dominant narrative: police officers are heroic figures who single-mindedly pursue justice, quell violent crime, protect the public from predators, and root out corruption in their own ranks. Using a variety of national surveys and Nielsen data, we first establish a robust correlation between exposure to these shows and attitudes about policing. We then use survey experiments that include a patient preference trial, along with field experiments in retirement homes in New Jersey and New York, to probe the causal impact of entertainment media and potential mechanisms.

Speaking with the State's Voice: The Decade-Long Growth of Government-Authored News Media in China under Xi Jinping
Autocratic governments around the world use clandestine propaganda campaigns to influence the media. We document a decade-long trend in China towards the planting of government-authored articles in party and commercial newspapers. To examine this phenomenon, we develop a new approach to identifying scripted propaganda—the coerced reprinting of lightly-adapted government-authored articles in newspapers—that leverages the footprints left by the government when making media interventions. We show that in China, scripted propaganda is a daily phenomenon—on 90% of days from 2012 to 2022, the majority of party newspapers include at least some scripted propaganda in response to a central directive. On particularly sensitive days, the amount of scripted propaganda can spike to 30% of the articles appearing in major newspapers. We show that scripted propaganda has strengthened under President Xi Jinping. In the last decade, the front page of party newspapers has evolved from 5% scripted articles to nearly 20%. This government-authored content throughout the paper is increasingly homogeneous—fewer and fewer adaptations are made by individual newspapers. In contrast to popular speculation, we show that scripted content is not limited to ideological topics (although it is increasingly ideological) and is also highly prevalent in commercial papers. Using a case study of domestic coverage of COVID-19, we demonstrate how the regime uses scripting to shape, constrain, and delay information during crises. Our findings reveal the wide-ranging influence of government-authored propaganda in the Chinese media ecosystem.