Sunday, August 11, 2019

Dissecting False Information: Some Know Truth Better Than They Admit

Disinformation: false or inaccurate information that is created and spread deliberately, with the intent to convince people of untruths or lies

Misinformation: false information that is spread regardless of whether there is intent to mislead; research strongly suggests that essentially all politics-related information is spread with the intent to convince listeners of its truth

False information: information that is objectively false; it includes all disinformation and all misinformation

Expressive responding: intentionally misreporting one’s beliefs about information from a partisan source in order to signal support for a party or tribe rather than to report what one actually believes; a person claims that false information is true, even while knowing it is false, to signal loyalty to their political group

Motivated reasoning: applying little or no critical assessment to information that confirms or reinforces existing beliefs, ideology, and tribal or social identity, while critically scrutinizing information that contradicts or undermines them, which makes believing such information difficult or impossible even when it is true; people tend to evaluate information that aligns with their views as more trustworthy and truthful, while seeing misaligned information as untrustworthy and therefore untrue

An initial study, based on data from 400 participants, suggests two interesting findings about what influences how people respond to information related to politics.[1] The research used headlines from true political news stories that were presented as having come from either the New York Times or Fox News; in fact, none of the stories came from either outlet. Research participants were asked whether they believed the headlines. The stories, all true, were selected based on prior research showing that people had a hard time telling whether they were true or false.

Participants were assigned to two groups. The control group reported whether they believed each of the 16 stories they were shown was true or false. The treatment group was paid a bonus of $1.60 for correctly stating whether 12 of the 16 stories were true or false.


The research was designed to try to determine the relative contribution of three different factors that could affect people’s trust in political information: (1) perceived institutional trustworthiness, e.g., NYT vs Fox News, (2) motivated reasoning, and (3) expressive responding. The researchers write in their article: “While these mechanisms are not exclusive, it is important to estimate their separate impact to not conflate a crisis in trust in the media with a rise in political expressive behavior.”
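
Estimating the “separate impact” of overlapping factors like these is essentially a regression problem. As a minimal sketch only (not the authors’ actual model or data), the Python snippet below simulates headline ratings under an assumed data-generating process and fits a logistic regression whose coefficients separate source alignment, content alignment (motivated reasoning), and the incentive’s interaction with content alignment (a rough stand-in for expressive responding). All variable names and effect sizes are assumptions made up for illustration.

```python
# Illustrative only: simulated data, not the study's; effect sizes are assumed.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400 * 16                               # 400 participants x 16 headlines

source_aligned = rng.integers(0, 2, n)     # attributed outlet matches the participant's politics
content_aligned = rng.integers(0, 2, n)    # headline content matches the participant's politics
paid = rng.integers(0, 2, n)               # treatment group paid for accuracy

# Assumed process: content alignment matters a lot, source alignment barely,
# and payment shrinks the content-alignment effect somewhat.
log_odds = -0.5 + 0.1 * source_aligned + 1.2 * content_aligned - 0.3 * paid * content_aligned
rated_true = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

X = sm.add_constant(np.column_stack([source_aligned, content_aligned, paid * content_aligned]))
result = sm.Logit(rated_true, X).fit(disp=0)

# Coefficient order: intercept, source alignment, content alignment, paid x content alignment.
print(result.params)
print(result.pvalues)
```

If expressive responding is at work, the coefficient on the paid-by-content term should come out negative, i.e., paying people for accuracy shrinks the gap between aligned and misaligned headlines.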

The first finding is that the source of the news, i.e., institutional trustworthiness (NYT or Fox), was not an influential factor (p > 0.05), which was unexpected given prior research. Those data are summarized in the top and bottom right panels of figure 2 of the preprint; in both the control and paid groups, the source showed a lower level of influence than prior research would suggest: “The figure shows that participants from the left and right rated New York Times articles and Fox News articles as true at a similar rate (right panel).”



What mattered more was whether the information tended to confirm or contradict existing ideology, beliefs and tribal identity (p < 0.0001). The $1.60 incentive to correctly assess whether stories were true or false increased accuracy, but the effect failed to reach statistical significance among left-leaning participants. That outcome puzzled the researchers, who expected to see similar expressive responding from left- and right-leaning participants.
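
For intuition about how a contrast like p > 0.05 versus p < 0.0001 can arise, a simple two-proportion z-test is enough. The sketch below uses made-up counts, not the study’s data, to compare how often politically aligned versus misaligned headlines are rated true; the numbers are purely hypothetical.

```python
# Illustrative two-proportion z-test on hypothetical counts (not the study's data).
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                      # pooled proportion under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value
    return z, p_value

# Hypothetical: 140 of 200 aligned headlines rated true vs. 95 of 200 misaligned.
print(two_prop_ztest(140, 200, 95, 200))
```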

The authors conclude with this:
There is some good news in our study: we show that the bias that is introduced by evaluating a politically aligned source may not be as severe as has been widely believed. We offer some bad news as well: there is a large gap in evaluating headline claims, depending on whether they align with a person’s politics. Worse, this gap is not significantly reduced even when the claims are made by a publisher that aligns with participant’s political views.

CAVEATS: This research must be taken with a grain of salt. It must be confirmed in a larger, follow-on replication study. The small influence of institutional trustworthiness and the failure to see expressive responding among left-leaning participants both contradict prior results. The point of this discussion is not to assert the validity of this study. Instead, the point is to show (1) that researchers are highly focused on trying to understand the deadly serious problem of false information’s influence on American politics, and (2) how tricky it is to tease apart the different cognitive and social factors that lead to false and irrational political beliefs and behaviors.[2]

Footnotes:
1. This research is preliminary. The manuscript (free download here) has not been peer-reviewed; the research needs to be replicated and expanded on to confirm the results.

2. The researchers acknowledge possible sources of error in their study.
Our study is not without limitations. It is possible that the participant responses in our incentive treatment group do not present respondents’ truthful evaluations of the headlines, as we propose, but instead are their best guess of what the researchers might label as ‘true’ or ‘false.’ . . . . However, our post-experiment questionnaire and open-ended responses by participants did not provide any indication that such activity had taken place.

Second, our study was limited to a specific set of publishers and our choice may have affected the results’ generalizability. Of related, but lesser concern, is the potential effect of the specific articles we selected as our stimuli. We believe, however, that our selections were robust, as we relied on previous literature and pre-tests to arrive at balanced samples.

B&B orig: 2/20/19
