Why Twitter Polls Should Have a Warning Label

“If you want the public’s opinion on anything — what to name your dog, who will win tonight’s game, which election issue people care most about — there’s no better place to get answers than on Twitter.”

This is how Twitter introduces its “Twitter Polls” feature. Twitter polls might be useful for entertainment and business, but when it comes to politics, it’s more complicated: Twitter polls are not scientific; they are not systematically conducted and therefore cannot represent public opinion. Yet surprisingly, many individuals, from ordinary citizens to public officials and political leaders, treat Twitter polls as a valid representation of public opinion. Whether they fail to recognize the polls’ unscientific nature or intentionally use them as a pseudo-scientific platform for promoting their views, the result is increased cacophony, misinformation and polarization on social media and beyond. Given these problems, Twitter should update its design by adding an interactive warning label, at least for politically relevant polls.

Taking Twitter Polls Seriously

Twitter polls have not been systematically studied so far, but I believe there is reason for concern. A cursory search for the keywords “Twitter polls” on Twitter turns up countless political polls posted by ordinary citizens. Most of these posts promote users’ partisan views by generating and disseminating favorable poll results, which is unsurprising given that users mostly have like-minded followers on Twitter. For example, ProgressPolls, a Twitter account with 127,000 followers, regularly posts polls on a range of political issues, prefaced with leading questions.

Such Twitter polls may seem harmless, but as my colleagues’ and my work shows, people give more credibility to favorable poll results, whether or not the poll is scientific. We can expect individuals to engage in such biased processing even more actively in the echo chambers of social media, where people vote, comment, re-tweet, and are exposed to Twitter poll results.

Research by scholars such as Tremayne and Dunwoody and Sundar and Kim demonstrates that online user interactivity increases persuasion. In other words, the hands-on nature of Twitter polls creates more psychological involvement and could further amplify people’s biases.

When public officials don’t get it

More worrying still is that some public officials use Twitter polls and claim that they are legitimate. President Trump tweeted a poll showing what he regarded as favorable presidential approval ratings while ignoring what systematic, traditional polls showed (he has attacked traditional polls as being “rigged”). Another example of official misuse came from a UK police department in November that was considering whether to use a controversial restraint device called a “spit hood” in arrest procedures. The Durham Constabulary set up a poll asking whether followers were in favor of the idea. A Durham police spokeswoman told the Guardian, “We have a huge social media following and so it seems fitting that we ask for public opinion. A poll provides measurable results which can help to shape decisions.” The problem, of course, is that Twitter polls provide no such thing.

The credibility of traditional polls suffers as well. The ease with which users can manipulate Twitter polls, not to mention the appropriation of the term “poll” for this superficial gauging of public opinion, may lead individuals to question the validity of polling in general.

A warning in the age of the self-polling public

If any Twitter users are taking Twitter polls seriously, then journalists, academics, and social media companies need to take them seriously too.

Fortunately, there are already tools available for this. First, there is community fact-checking: ordinary social media users sometimes comment on Twitter polls to highlight their methodological problems. Second, journalists and pollsters have intervened to highlight the pitfalls of Twitter polls, and should continue to do so.

But these expert corrections reach only a limited audience. Also, as research with my collaborators shows, expert corrections on the methodological quality of polls are not effective in eliminating people’s biases. When they are effective, it tends to be only with highly educated respondents.

We might need a different approach in the context of social media. 

Specifically, we need design-level strategies to reduce misinformation and polarization. One possibility is a small change in the Twitter polls’ interface: Twitter could place an interactive methodological warning label at the corner of each poll, both before and after it is posted. It might say something like “This poll is not scientific,” or offer a clickable box saying “This poll’s results are NOT systematic, representative or valid,” perhaps with a link to more detailed information elsewhere.

A more targeted approach might incorporate software that detects polls with political content and then activates a warning banner once the poll is posted (a rough sketch of the idea follows). This small interface change might even contribute to the general public’s polling literacy in the long term.
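To make the idea concrete, here is a minimal sketch of such a filter in Python. Everything in it is hypothetical: the keyword list, the warning text and the `flag_poll` function are invented for illustration, and a production system would more likely rely on a trained text classifier than on keyword matching, though the shape of the logic would be similar.

```python
# Hypothetical sketch: flag a poll whose question text contains
# political keywords and attach a methodological warning. The keyword
# list, warning text and function name are all invented here.

POLITICAL_KEYWORDS = {
    "election", "president", "congress", "senate", "vote",
    "democrat", "republican", "policy", "approval", "impeachment",
}

WARNING_LABEL = (
    "This poll is not scientific: its respondents are self-selected, "
    "so its results do not represent public opinion."
)

def flag_poll(question: str) -> str | None:
    """Return a warning label if the poll question looks political."""
    # Normalize each word: lowercase, strip surrounding punctuation.
    words = {w.strip(".,?!:;\"'").lower() for w in question.split()}
    if words & POLITICAL_KEYWORDS:  # any keyword present?
        return WARNING_LABEL
    return None  # non-political polls stay unlabeled

if __name__ == "__main__":
    print(flag_poll("Who will win the election?"))  # -> warning text
    print(flag_poll("What should I name my dog?"))  # -> None
```

A crude filter like this errs on the side of over-flagging, which is arguably the right default for a warning label: a false positive costs a reader one superfluous caveat, while a false negative leaves a pseudo-scientific result unlabeled.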

Similar design hacks to fight misinformation and polarization are increasingly being adopted on other platforms. Facebook has started to flag fake news stories with its fact-checking partners. It has also updated its design to surface related articles, which scholars Bode and Vraga have found to be effective in reducing misinformation. The Center for Media Engagement found that introducing a “Respect” button in online comment sections, an approach the Intercept recently adopted, can reduce partisan incivility. Likewise, Twitter should consider, or at least pilot, a warning label for polls.

Ozan Kuru is a PhD candidate in Communication Studies and a Rackham Predoctoral Fellow at the University of Michigan.

What The Failure of Election Predictions Could Mean for Media Trust and Data Journalism

The election marathon has finally ended, but the scandals and disputes we have seen will probably linger for a while in the public consciousness. One specific casualty will be the public credibility of data journalism and trust in the media: the pre-election conspiracy theories about the polls could be reinforced by what many considered an across-the-board prediction failure.

While many polling post-mortems have debated the issues that led to this prediction failure, the implications for public perceptions still deserve attention, especially given the persistence of Donald Trump’s “rigged polls” claim even a month after the election. Moreover, some features of current data journalism could contribute to this public cynicism and mistrust of the media.

Before the election, we witnessed unprecedented levels of partisan framing of factual evidence about the candidates’ performance. The claims of a rigged election, the misinterpretation of standard methodological decisions such as oversampling, the alleged hidden Trump vote in “rigged” polls, and the disputes over low-quality online polls about who won the presidential debates were all examples of partisan contestation of factual evidence and, more broadly, of post-truth, post-fact politics. Virtually all the evidence seemed to point to a Clinton win, yet some partisans discredited all of this data-driven evidence.

These partisan biases were not so surprising. Echoing other research on perceptions of numerical evidence, our recent studies provide systematic insight into these perceptual processes. In large national survey experiments on public perceptions of polls, we found that motivated partisans used the methodological details of polls as fodder for discrediting them when the results were unfavorable, even when objective experts directly told them which polls had robust methodology and which were poorly conducted.

And yet, against all expectations and the data suggesting otherwise, Donald Trump won the election. This was, by the account of many experts and pundits, a prediction failure. And yes, the polls (and all the other data) turned out to be somewhat “rigged,” though not through intentional manipulation but through flawed methodological assumptions.

The Dangers of Diverse Data

Photo by Cory M. Grenier on Flickr and reused here with Creative Commons license.

Regardless of the reasons behind this prediction failure, the damage to public perceptions is already done, and it is substantial. In a context where the winning candidate attacked polls directly as “rigged,” this polling failure will reinforce pre-election conspiracy theories and further erode trust in the media at large.

Moreover, some features of today’s data journalism, which relies on constant methodological critique, transparency, and analytical demonstrations, could reinforce this public cynicism and these motivational biases. For one, the data provided in today’s news reporting are far more diverse: traditional polls, polling aggregations and averages, forecasting models, Google search-term analytics, automated social media analytics, and election prediction markets. The reports themselves are more dynamic: real-time, user-interactive, visual, analytical, and immersive (as in smartphone apps). Such diverse data are increasingly integrated into mainstream reporting, and not knowing the distinctions among them may fuel cynicism in some of the public.

Second, these reports are far from conclusive or fully objective, especially in how they are perceived. There are often multiple competing statistics: poll results contradict one another, and any policy-relevant number from one source is fact-checked and debunked by others. Of course, not all numerical evidence is sound or based on rigorous methodology, and some is better than the rest. In brief, much of what emerges is counter-evidence, fact-checking, or outright questioning of someone’s statistical claims.

Relatedly, this competitive news environment fuels heavily opinionated reporting, with columns and op-eds in which these diverse data and results are critically evaluated. There are also frequent analytical demonstrations in outlets like Vox, Politico and the NYT Upshot showing why different forecasting models produce different results, how different researchers and pollsters can reach completely different conclusions from the same raw data, how weighting can swing poll results on the strength of even a single respondent, how the margin of error is in fact greater than reported, how apparent changes in results can be a mirage, and how election maps can mislead us. There is even discussion of open-source, reproducible data outlets that let members of the public access and analyze the data themselves.
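The margin-of-error point is easy to make concrete. Here is a minimal sketch with invented numbers: the commonly reported figure, z·√(p(1−p)/n) at 95 percent confidence, covers sampling error alone, while weighting inflates it through a design effect, and subgroup estimates rest on much smaller samples. The sample size, candidate share and design effect below are all hypothetical.

```python
import math

def margin_of_error(p: float, n: float, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000   # hypothetical sample size
p = 0.52   # hypothetical candidate share

# The headline number typically reported with the poll:
print(f"reported MOE:  +/- {margin_of_error(p, n):.1%}")        # ~3.1%

# Weighting widens it: a design effect of 1.5 (a plausible inflation
# from demographic weighting) shrinks the effective sample size.
deff = 1.5
print(f"weighted MOE:  +/- {margin_of_error(p, n / deff):.1%}") # ~3.8%

# Subgroup estimates (say, 200 respondents under 30) are far noisier:
print(f"subgroup MOE:  +/- {margin_of_error(p, 200):.1%}")      # ~6.9%
```

This is one reason the real uncertainty around a headline poll number is routinely larger than the plus-or-minus three points printed beneath it.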

A Different Kind of Horse Race

Photo by Dan Howard and used under Creative Commons license

Previous research has shown that traditional horse-race coverage of elections, which focuses on predictions, vote shares and candidates’ campaign strategies while crowding out policy coverage, can lead to a cynical public. Although later research suggests these effects might be limited, today’s data-driven coverage could foster an information environment that facilitates biases for some of the public, if not all of it.

What we are seeing today is that statistical information itself is being critiqued at unprecedented rates, and not just by discrediting the sources that report the data or by offering alternative interpretations of the same results, but also by slowly walking news readers through alternative analyses that lead to conflicting or more nuanced conclusions.

These reports will succeed in informing and engaging news readers with sound data. But they could also confuse some readers. They might heighten the sense that all data are volatile, moldable and disputable; that data can be analyzed, fact-checked and presented in different ways; and that they are therefore amenable to different or more nuanced conclusions. The implications of this data-conscious public will be substantial for belief in specific reports and for trust in the media at large, and they should be studied carefully in the post-election period.

Ozan Kuru is a PhD candidate in Communication Studies at the University of Michigan. The views expressed here are his own. Ozan studies the communication of public opinion and its psychological underpinnings by looking at public perceptions of, and reactions to, the coverage of public opinion. He has published in academic journals including Public Opinion Quarterly (forthcoming), Mobile Media and Communication, and Computers in Human Behavior, and contributed a chapter to Social Media and Politics: A New Way to Participate in the Political Process. He has also blogged for the Center for Political Studies at the University of Michigan and The Monkey Cage at the Washington Post. His ongoing research has been funded by the NSF Time-sharing Experiments for the Social Sciences program, and his work received the Doris Graber Best Student Paper Award at the 2016 Midwest Association for Public Opinion Research conference. The research cited here is collaborative work with Professors Josh Pasek and Michael Traugott.
