“If you want the public’s opinion on anything — what to name your dog, who will win tonight’s game, which election issue people care most about — there’s no better place to get answers than on Twitter.”
This is how Twitter introduces its “Twitter Polls” feature. Twitter polls might be useful for entertainment and business, but when it comes to politics, things are more complicated: Twitter polls are not scientific; they are not systematically conducted and therefore cannot represent public opinion. Yet surprisingly, many individuals – ordinary citizens, public officials, and political leaders – treat Twitter polls as a valid representation of public opinion. Whether they fail to recognize the polls’ unscientific nature or intentionally use them as a pseudo-scientific platform for promoting their views, the result is increased cacophony, misinformation, and polarization on social media and beyond. Given these problems, Twitter should update its design by adding an interactive warning label, at least for politically relevant polls.
Taking Twitter Polls Seriously
Twitter polls have not been systematically studied so far, but I believe there is reason for concern. A cursory search for the keywords “Twitter polls” on Twitter turns up countless political polls posted by ordinary citizens. Most of these posts promote users’ partisan views by generating and disseminating favorable poll results — unsurprising given that users mostly have like-minded followers on Twitter. For example, ProgressPolls is a Twitter account with 127,000 followers that regularly posts polls on a range of political issues, prefaced with leading questions.
Such Twitter polls may seem harmless, but as work by my colleagues and me shows, people give more credibility to favorable poll results, whether or not the poll is scientific. We can expect individuals to engage in such biased processing even more actively in the echo chambers of social media, where people vote, comment, retweet, and are exposed to Twitter poll results.
Research by various scholars, such as Tremayne and Dunwoody and Sundar and Kim, demonstrates that online user interactivity increases persuasion. In other words, the hands-on nature of Twitter polls generates more psychological involvement and could further amplify people’s biases.
When public officials don’t get it
More worrying still is that some public officials use Twitter polls and claim that they are legitimate. President Trump tweeted a poll showing what he regarded as favorable presidential approval ratings and ignored what the systematic, traditional polls showed (he has attacked traditional polls as being “rigged”). Another example of official misuse of Twitter polls came in November from a UK police force that was considering whether to use a controversial restraint device called a “spit hood” during arrests. The Durham Constabulary set up a poll asking followers whether they were in favor of the idea. A Durham police spokeswoman told the Guardian, “We have a huge social media following and so it seems fitting that we ask for public opinion. A poll provides measurable results which can help to shape decisions.” The problem, of course, is that Twitter polls do not provide any such thing.
The credibility of traditional polls suffers as well. The ease with which users can manipulate Twitter polls — not to mention the appropriation of the term “poll” for this superficial gauging of public opinion — may lead individuals to question the validity of polling in general.
A warning in the age of the self-polling public
If any Twitter users are taking Twitter polls seriously, then journalists, academics, and social media companies need to take them seriously too.
Fortunately, there are already tools available for this. First, there is community fact-checking: ordinary social media users sometimes comment on Twitter polls to highlight their methodological problems. Second, journalists and pollsters have intervened to highlight the pitfalls of Twitter polls, and they should continue to do so.
But these expert corrections reach only a limited audience. Also, as research with my collaborators shows, expert corrections on the methodological quality of polls are not effective in eliminating people’s biases. When they are effective, it tends to be only with highly educated respondents.
We might need a different approach in the context of social media.
Specifically, we need design-level strategies to reduce misinformation and polarization. One possibility is a small change to the Twitter poll interface: Twitter could place an interactive methodological warning label at the corner of each poll, both before and after it is posted. It might say something like “This poll is not scientific,” or include a clickable box saying “This poll’s results are NOT systematic, representative, or valid,” perhaps with a link to more detailed information elsewhere.
A more targeted approach might incorporate software that detects polls with political content and then activates a warning banner once the poll is posted, as sketched below. This small interface change might even contribute to the general public’s polling literacy in the long term.
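To make the idea concrete, here is a minimal sketch in Python of how such a detector could work. The keyword list, the poll dictionary, and the function names are hypothetical illustrations, not Twitter’s actual code or API; a production system would more likely rely on a trained text classifier than a keyword list.

# Minimal, hypothetical sketch of flagging politically relevant polls.
# Assumes a simple keyword list and a plain dictionary as the poll object.

POLITICAL_KEYWORDS = {
    "election", "president", "congress", "senate", "democrat",
    "republican", "impeach", "approval rating", "policy", "vote",
}

WARNING_TEXT = "This poll is not scientific and may not represent public opinion."

def is_political(poll_text: str) -> bool:
    """Return True if the poll text mentions any keyword on the list."""
    text = poll_text.lower()
    return any(keyword in text for keyword in POLITICAL_KEYWORDS)

def attach_warning(poll: dict) -> dict:
    """Attach a warning banner to politically relevant polls."""
    if is_political(poll["question"]):
        poll["warning_banner"] = WARNING_TEXT
    return poll

# Hypothetical poll object for illustration.
poll = {"question": "Do you approve of the President's performance?",
        "options": ["Yes", "No"]}
print(attach_warning(poll))

Even this crude approach shows where the warning would live in the data: as a field attached to the poll itself, so the banner travels with the poll wherever it is displayed or retweeted.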
Similar design hacks to fight misinformation and polarization are increasingly being adopted on other platforms. Facebook has started flagging fake news stories with its fact-checking partners, and it has also updated its design to surface related articles, an approach that scholars Bode and Vraga have found to be effective in reducing misinformation. The Center for Media Engagement found that introducing a “Respect” button in online comment sections can reduce partisan incivility, and the Intercept recently adopted the feature. Likewise, Twitter should consider, or at least pilot, a warning label for polls.
Ozan Kuru is a PhD candidate in Communication Studies and a Rackham Predoctoral Fellow at the University of Michigan.